Researchers from Google Brain and Google Research are working on an artificial-intelligence system based on machine learning: NeRF in the Wild (NeRF-W), an extension of Neural Radiance Fields to unconstrained photo collections. The purpose of this neural network is to build 3D scenes, and animated fly-throughs of them, from ordinary photographs.
In addition, the system removes bystanders from images and evens out exposure, tone, and color across photos. This makes it possible to quickly create 3D renders in which you can choose the viewpoint, adjust the lighting, and so on.
The researchers have already recreated 3D models of the Brandenburg Gate in Berlin, the Sacré-Cœur Basilica in Paris, and the Trevi Fountain in Rome, using only photos from sites such as Flickr. The result is detailed 3D renders of these locations in which the point of view can be selected manually and the scene lighting changed.
NeRF-W is based on NeRF, the researchers' original work, which did the same thing but worked well only under tightly controlled capture conditions. The technique trains a neural network to represent the scene as a volumetric field of density and color, and then synthesizes views of the scene by rendering that volume directly along camera rays.
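The volume-rendering step can be sketched in a few lines: each sample along a camera ray contributes its color weighted by how opaque it is and by how much of the ray survives to reach it. This is a minimal illustrative NumPy sketch, not code from the NeRF-W project; the sample counts and densities are made up.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Alpha-composite samples along one camera ray (NeRF-style).

    densities: (N,) non-negative volume density at each sample
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,) distance between adjacent samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-density_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: fraction of the ray that reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                       # per-sample contribution
    rgb = (weights[:, None] * colors).sum(axis=0)  # final pixel color
    return rgb, weights

# Example: an opaque red "wall" halfway along the ray
n = 64
densities = np.zeros(n)
densities[32:] = 50.0                  # dense region acts as a solid surface
colors = np.tile([1.0, 0.0, 0.0], (n, 1))
deltas = np.full(n, 1.0 / n)
rgb, weights = render_ray(densities, colors, deltas)
print(rgb)  # close to pure red: the dense region absorbs the ray
```

In the real system the densities and colors come from a trained neural network queried at each sample point, rather than being set by hand.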
To handle lighting and post-processing differences between photos, the authors used a low-dimensional per-image appearance embedding: this lets the model not only reproduce the lighting of a particular photo, but also synthesize the scene's lighting from new angles. To remove transient objects that happen to be in the frame, the authors built a secondary volumetric component, which makes it possible to separate incidental objects from the static scene.
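The per-image appearance idea can be illustrated as follows: the network that predicts a point's color also receives a small learned vector tied to the source photo, so the same geometry can be rendered under different photos' lighting. The tiny random "network" below is a hypothetical stand-in for the real trained MLP; all sizes and weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 8  # size of the per-image appearance embedding (illustrative)

# Stand-in for a trained color MLP: random fixed weights
W_xyz = rng.normal(size=(3, 16))
W_emb = rng.normal(size=(EMB_DIM, 16))
W_out = rng.normal(size=(16, 3))

# One embedding per training photo, learned jointly with the network
# (here: imagine photo 0 is sunny, photo 1 is taken at dusk)
embeddings = rng.normal(size=(2, EMB_DIM))

def color_at(xyz, image_id):
    """Color of a 3D point, conditioned on a specific photo's appearance."""
    h = np.tanh(xyz @ W_xyz + embeddings[image_id] @ W_emb)
    return 1.0 / (1.0 + np.exp(-(h @ W_out)))  # sigmoid -> RGB in [0, 1]

point = np.array([0.1, 0.2, 0.3])
c0 = color_at(point, 0)
c1 = color_at(point, 1)
# Same 3D point, different appearance: the color depends on the per-image
# embedding, which absorbs lighting and exposure differences between photos.
print(c0, c1)
```

Because geometry (density) is shared while color is conditioned on the embedding, interpolating between embeddings yields smooth lighting changes for a fixed viewpoint.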
The creators of NeRF-W say that a lot of work still needs to be done, especially on reconstruction quality: the system can currently produce inconsistent or visibly flawed renders from different pictures of the same place taken from the same angle, and it is not yet clear why this happens.