NeRF, or how to create a 3D scene from a few photos: it's magic!

After the revolution of generative AIs like ChatGPT or Midjourney, another type of AI is experiencing rapid development. Neural radiance fields, NeRF for short, make it possible to create a 3D scene from a few photos taken with a smartphone.

Chatbots are not the only area where artificial intelligence is undergoing a real revolution. Neural Radiance Fields, or NeRF, are to 3D what Large Language Models (LLMs) such as GPT are to chatbots: a neural network capable of creating a three-dimensional scene from two-dimensional photos.
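
For readers curious about what happens under the hood, here is a minimal sketch in Python (with PyTorch) of the core idea: a small neural network takes a 3D position and a viewing direction and returns a color and a density, which are then blended along each camera ray to produce a pixel. It is a deliberately simplified illustration, not the code of any published system; real NeRFs add positional encodings, hierarchical sampling and a full training loop.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy radiance field: maps a 3D point and a viewing direction
    to an RGB color and a volume density."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),   # (x, y, z) + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                  # RGB + density
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])          # colors kept in [0, 1]
        sigma = torch.relu(out[..., 3])            # density is non-negative
        return rgb, sigma

def render_ray(model, origin, direction, near=0.1, far=4.0, n_samples=64):
    """Very simplified volume rendering: sample points along one camera ray
    and blend their colors according to the predicted densities."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction             # (n_samples, 3)
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(points, dirs)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)               # opacity of each sample
    transmittance = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * transmittance                       # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)            # final pixel color
```

Training then consists of comparing pixels rendered this way with the pixels of the real photos and adjusting the network until they match; that part is left out of the sketch.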

NeRF technology is more than a handful of purely theoretical research papers: several working versions already exist. It is what allowed Google to create Immersive View, an impressive and highly detailed 3D view of several major cities.

By combining a NeRF network with a language model, it is possible to identify objects in a real scene. © University of California at Berkeley

A smartphone is enough to create a 3D scene

In concrete terms, all you have to do is take a few photos from different angles with your smartphone or a drone, or shoot a short video, and the NeRF system generates the scene in 3D. You can then move the camera through the scene as in a video game, insert 3D objects into another scene, or change the background of a video.
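
Once such a network has been trained on the photos, a fly-through is simply the same rendering applied from camera positions that were never photographed. The snippet below is a hypothetical illustration of that idea; it reuses the TinyNeRF and render_ray sketch above and circles a virtual camera around the scene.

```python
import math
import torch

def render_image(model, origin, forward, width=48, height=36, fov=0.8):
    """Render a tiny image by casting one ray per pixel from a pinhole
    camera at `origin` looking along `forward` (uses render_ray above)."""
    right = torch.linalg.cross(forward, torch.tensor([0.0, 0.0, 1.0]))
    right = right / right.norm()                    # camera basis, world up = +z
    up = torch.linalg.cross(right, forward)
    image = torch.zeros(height, width, 3)
    with torch.no_grad():                           # rendering only, no training
        for y in range(height):
            for x in range(width):
                u = (x / width - 0.5) * fov
                v = (y / height - 0.5) * fov * height / width
                direction = forward + u * right + v * up
                image[y, x] = render_ray(model, origin, direction / direction.norm())
    return image

def orbit_pose(angle, radius=3.0, elevation=1.0):
    """Camera position on a circle around the scene, looking at the origin."""
    origin = torch.tensor([radius * math.cos(angle),
                           radius * math.sin(angle),
                           elevation])
    return origin, -origin / origin.norm()

# Fly-through: render a handful of frames while circling the scene, as if
# moving the camera in a video game. An untrained TinyNeRF is used here only
# so that the snippet runs; a real fly-through needs a model trained on photos.
model = TinyNeRF()
frames = [render_image(model, *orbit_pose(2 * math.pi * i / 8)) for i in range(8)]
```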

A designer posted on Twitter how she used Luma AI to create a dolly zoom (also called a compensated tracking shot), a cinematic effect that conveys a sense of vertigo. Combining NeRF with other tools enables even more creativity: another video posted to Twitter shows Stable Diffusion and EbSynth being used to turn a Buddha statue into gold and transform the entire landscape around it.
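
The dolly zoom itself rests on a very simple piece of geometry: as the camera pulls back from the subject, the focal length is increased in the same proportion, so the subject keeps the same size in the frame while the background appears to stretch. A tiny illustration, with made-up numbers:

```python
# Dolly zoom ("Vertigo effect") in one line of geometry: an object at distance d
# keeps the same apparent size in the frame as long as the focal length f grows
# in proportion to d. The numbers below are purely illustrative.
start_distance = 2.0    # metres from camera to subject
start_focal = 35.0      # millimetres

for step in range(5):
    d = start_distance + step              # camera pulls back one metre per step
    f = start_focal * d / start_distance   # zoom in to compensate
    print(f"distance {d:.1f} m  ->  focal length {f:.1f} mm")
```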

Neural radiance fields connect to other AIs

This technology is developing faster and faster. To give just a few examples: Mip-NeRF, a reference in the field, was released in 2021. Last year, Nvidia introduced Instant NeRF, capable of creating a Full HD scene in a matter of milliseconds. And researchers at Google have just published Zip-NeRF, which is 22 times faster than Mip-NeRF.

The possible uses of this technology are also very numerous. The University of California at Berkeley combined a NeRF network with a language model to create LeRF, a system for identifying objects in a scene that could prove useful in robotics. Specifically, the researchers envision combining it with ChatGPT: it would be enough to tell the AI that someone has spilled coffee, ChatGPT would then generate the list of actions (fetch a rag, get cleaning products, rinse the cloth…), and the robot could easily locate the stain thanks to LeRF. A team in Japan is working on rendering a scene in real time with the Unreal Engine game engine, which would make it possible, for example, to turn your garden into a level of a video game. Another team, in Singapore, has created an AI called HOSNeRF capable of transforming a video so that it can be viewed from any angle, rather than being limited to static scenes.
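
Conceptually, this kind of query can be pictured as follows: the radiance field stores a language embedding at every point of the scene, and the points whose embeddings best match a CLIP-style embedding of the phrase "coffee stain" are the ones the robot should head for. The sketch below is purely illustrative; the helpers field_embedding and embed_text are hypothetical placeholders, not the Berkeley team's actual code.

```python
import torch
import torch.nn.functional as F

def locate_object(query, points, field_embedding, embed_text, top_k=100):
    """Illustrative LeRF-style query: score every sampled 3D point of the scene
    by the similarity between its language embedding and the embedding of the
    text query, then keep the best-matching points.

    points          : (N, 3) tensor of 3D positions sampled in the scene
    field_embedding : hypothetical function (N, 3) -> (N, D) language embeddings
                      predicted by the radiance field
    embed_text      : hypothetical function str -> (D,) CLIP-style text embedding
    """
    text_emb = embed_text(query)                          # (D,)
    point_embs = field_embedding(points)                  # (N, D)
    scores = F.cosine_similarity(point_embs, text_emb[None, :], dim=-1)
    best = scores.topk(min(top_k, len(scores))).indices
    return points[best], scores[best]

# Imagined robot pipeline from the article: a language model turns
# "someone spilled coffee" into a list of actions, and the query below tells
# the robot where the stain is in the 3D scene.
# stain_points, stain_scores = locate_object("coffee stain", scene_points,
#                                            field_embedding, embed_text)
```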

The enthusiasm for neural radiance fields is real, and the general public is likely to come across this technology more and more often. It is expected to revolutionize cinema, photography and 3D in the years to come.