Create NeRFs with Nvidia Instant-NGP


Summary

Nvidia’s Instant-NGP creates NeRFs in seconds. A new version makes it easy to use even without coding knowledge.

Neural Radiance Fields (NeRFs) can learn a 3D scene from dozens of photos and then render it photorealistically. The technology is a hot candidate for the next major visualization technology and is being developed by AI researchers and companies such as Google and Nvidia. Google, for example, uses NeRFs in Immersive View.

The technology has made its way out of research labs and into the hands of photographers and other artists eager to experiment. But access remains strewn with pitfalls, often requiring coding knowledge and significant computing power.

Nvidia’s Instant-NGP and the open-source Nerfstudio make NeRFs more accessible

Tools such as the open-source Nerfstudio toolkit attempt to simplify the process of creating NeRFs, offering tutorials, a Python library, and a web interface. Among other things, Nerfstudio relies on an implementation of Instant-NGP (Instant Neural Graphics Primitives), a framework from Nvidia researchers that can train NeRFs in seconds on a single GPU.


Now the Instant-NGP team has released a new version that requires just one line of code for custom NeRFs – everything else is handled through a simple executable file that launches an interface.

Even better: if you just want to try the included example, or are willing to download one more small file, you don’t need any code at all. Instant-NGP works on all Nvidia cards from the GTX 10 series onward and requires CUDA 11.5 or higher on Windows.

Instant-NGP: First steps with the fox NeRF

To train the included fox example with Instant-NGP, all you need to do is download the Windows binary version of Instant-NGP that matches your Nvidia graphics card. Start the interface via instant-ngp.exe in the downloaded folder. Then drag the fox folder under data/nerf/ into the Instant-NGP window and start training. After a few seconds, a fox head should be visible against a wall.
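If you prefer a command line over drag and drop, the executable also accepts a scene path as an argument, as documented for the source build. A minimal sketch in Python, assuming the script runs from the unpacked Instant-NGP release folder (paths are illustrative):

```python
import subprocess

# Launch the Instant-NGP GUI directly on the bundled fox example.
# Assumes the working directory is the unpacked release folder.
subprocess.run(["instant-ngp.exe", "data/nerf/fox"], check=True)
```

Training begins as soon as the window opens, just as it does after dragging the folder in.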

You can now use the settings to create meshes, for example, or use the camera interface to plan a camera path and then render a video, which is saved in the Instant-NGP folder. EveryPoint’s Jonathan Stephens demonstrates how this works in a short video tutorial on YouTube.

How to turn your own video into a NeRF

For a custom NeRF, Instant-NGP supports two approaches: COLMAP to create a dataset from a set of photos or videos you have taken, or Record3D to create a dataset using an iPhone 12 Pro or later (based on ARKit).


Both methods extract frames from video footage and determine the camera position for each frame in the training dataset, since NeRF training requires this information.
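Under the hood, the COLMAP workflow calls the colmap2nerf.py script that ships with Instant-NGP. A minimal sketch of a video-based invocation, run from the Instant-NGP folder – the file name and fps value are illustrative, the flags follow the script’s documented options:

```python
import subprocess

# Build a NeRF dataset from a video: extract frames at 2 fps and let
# COLMAP estimate a camera pose for each extracted frame.
subprocess.run([
    "python", "scripts/colmap2nerf.py",
    "--video_in", "my_clip.mp4",  # illustrative input file
    "--video_fps", "2",           # frame extraction rate (see below)
    "--run_colmap",               # run COLMAP to estimate camera poses
    "--aabb_scale", "16",         # scene bounds written into transforms.json
], check=True)
```

The result is a transforms.json file that pairs each extracted frame with its camera pose – the same thing the ready-made .bat files described below produce for you.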

For those with no coding knowledge, two simple .bat files can be downloaded from Stephens’ GitHub repository InstantNGP batch – one for photos and one for videos. Simply drop the files into the Instant-NGP folder and drag a video or a folder of images onto the appropriate .bat file.

This is what your folder should look like after adding the .bat files. | Image: Jonathan Stephens

For video, you always have to set a value that determines how many frames per second are extracted from the footage. Aim for a total of 150 to 300 extracted frames for the entire video. A value of 2, for example, yields approximately 120 frames for a 60-second clip.
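The arithmetic behind that recommendation is simple: extracted frames ≈ fps value × clip length in seconds. A quick sanity check in Python:

```python
def extracted_frames(video_seconds: float, fps_value: float) -> float:
    """Approximate frame count extracted at the given fps value."""
    return video_seconds * fps_value

print(extracted_frames(60, 2))  # ~120 frames for a 60-second clip
print(extracted_frames(60, 4))  # ~240 frames, inside the 150-300 target
```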

Cactus-NeRF sucks – but this is my first try

When the process is complete, the Instant-NGP window opens and training starts automatically. You can now adjust the training settings as described above, enable DLSS, or render a video.
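If you would rather script training than click through the GUI, Instant-NGP also ships Python bindings (pyngp), which its scripts/run.py is built on. A minimal sketch, assuming the bindings have been built and are importable – the exact API differs between versions:

```python
import pyngp as ngp  # Instant-NGP's Python bindings, built from source

# Train a NeRF headlessly on a prepared dataset folder.
testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_training_data("data/nerf/fox")  # illustrative scene path
testbed.shall_train = True

# Advance training frame by frame until a step budget is reached.
while testbed.frame():
    if testbed.training_step >= 2000:  # illustrative budget
        break

print(f"Finished after {testbed.training_step} training steps")
```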

My first attempt is full of artifacts, which can probably be fixed easily – for example, by adjusting the aabb_scale value, which determines how far the NeRF implementation traces rays through the scene. But the whole process, including recording the video, took me less than five minutes. I used an RTX 3060 Ti as my GPU.
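The aabb_scale value lives in the dataset’s transforms.json and, per Nvidia’s documentation, should be a power of two up to a maximum of 128. A minimal sketch of raising it for a scene whose background stays blurry – the value and file location are illustrative:

```python
import json

# Enlarge the scene bounds so rays are traced further from the center.
with open("transforms.json") as f:  # created by colmap2nerf.py or the .bat files
    transforms = json.load(f)

transforms["aabb_scale"] = 32  # illustrative; a power of two, max 128
with open("transforms.json", "w") as f:
    json.dump(transforms, f, indent=2)
```

Reload the dataset in Instant-NGP afterwards, and the rays will be traced across the larger volume.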
