Over the last six months, we have seen more and more papers come in that help power real-time rendering of NeRFs and radiance fields.
Until the recent introduction of Zip-NeRF, it was difficult to achieve the quality of state-of-the-art methods, such as Mip-NeRF 360 or Ref-NeRF, with fast training times and high FPS. Even with Zip-NeRF, the training time is still right around an hour.
However, with the introduction of 3D Gaussian Splatting, that trade-off is no longer necessary. The authors achieve real-time rendering at frame rates above 100 FPS at 1080p.
The amount of detail this method recovers is stunning, as seen in the examples below.
To accomplish this, they employ three steps. First, they represent the scene with 3D Gaussians, which preserve the necessary properties of the scene while avoiding wasted computation in empty regions. Second, they optimize the properties of these Gaussians to reconstruct the scene more accurately. Finally, they introduce a fast rendering method that improves training speed and enables real-time rendering.
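To make the first and third steps concrete, here is a minimal sketch (not the authors' implementation, and with hypothetical function names) of the two core ideas: each Gaussian carries a covariance built from a scale and rotation, and rendering alpha-blends sorted splats front to back, so empty space with no Gaussians costs nothing.

```python
import numpy as np

def covariance_from_scale_rotation(scale, rotation):
    """Build an anisotropic covariance Sigma = R S S^T R^T.

    Factoring through scale S and rotation R keeps Sigma positive
    semi-definite during optimization, which a raw 3x3 matrix would not.
    """
    S = np.diag(scale)
    return rotation @ S @ S.T @ rotation.T

def blend_front_to_back(colors, alphas):
    """Front-to-back alpha compositing of splats already sorted by depth."""
    out = np.zeros(3)
    transmittance = 1.0
    for color, alpha in zip(colors, alphas):
        out += transmittance * alpha * np.asarray(color, dtype=float)
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # early termination once the pixel is opaque
            break
    return out

# A half-transparent red splat in front of an opaque green one:
pixel = blend_front_to_back([[1, 0, 0], [0, 1, 0]], [0.5, 1.0])
# pixel -> [0.5, 0.5, 0.0]
```

The early-termination check mirrors why this representation is fast: once accumulated opacity saturates, no further Gaussians along the ray need to be touched.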
The authors went on to demonstrate their method on several existing datasets below.
It's easy to see the quality improvements in the foreground, subject, and background of each NeRF scene.
The code has not yet been published, but has been marked as coming soon. Interestingly, it was built mainly on PyTorch, with only the rasterization written in CUDA. This should allow a larger base of researchers to explore the code, while also leaving opportunities to potentially increase the speed and success rates by fully porting it to CUDA in the future.
I'm interested to see how this would stack up against BakedSDF, MERF, and Zip-NeRF (with their 30K variant) in a head-to-head comparison.
This comes as an exciting development that lowers the barrier to rendering NeRFs at high quality in real time.