MS-NeRF: Sharper Reflections with Multi-Space Neural Radiance Fields
Michael Rubloff
May 9, 2023
One of NeRF's big differentiators compared to earlier methods is its ability to handle reflections. That doesn't mean there isn't room for improvement: NeRF will occasionally return blurred results for reflective surfaces such as mirrors. Yesterday, MS-NeRF was unveiled as a lightweight add-on that lets most NeRF methods render reflections and refractions more accurately.
MS-NeRF does this by representing the scene with a group of feature fields in parallel sub-spaces, which gives the network a better handle on reflective and refractive objects. This multi-space scheme lets MS-NeRF handle mirror-like objects in 360-degree high-fidelity rendering automatically, without any manual labeling.
The MS-NeRF method works as an enhancement to existing NeRF methods, requiring only a small computational overhead to train and infer the extra-space outputs. Keeping the module lightweight matters, as NeRF is already computationally heavy and can demand a lot of VRAM.
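To make that more concrete, here is a minimal PyTorch sketch of what such a multi-space output head could look like, attached to the per-ray features an existing NeRF backbone already produces. The class name, the number of sub-spaces, the feature sizes, and the tiny decoder and gate MLPs are my own illustrative choices rather than the authors' released code, and the real method renders each sub-space before blending, so treat this as a rough picture of the idea, not the implementation.

```python
# Hypothetical sketch of a multi-space output module in the spirit of MS-NeRF.
# All names and dimensions are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn

class MultiSpaceHead(nn.Module):
    def __init__(self, backbone_dim=256, num_subspaces=6, feat_dim=24):
        super().__init__()
        self.k = num_subspaces
        # One linear layer emits a feature vector for each parallel sub-space.
        self.subspace_feats = nn.Linear(backbone_dim, num_subspaces * feat_dim)
        # Tiny decoder: per-sub-space feature -> RGB for that sub-space.
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))
        # Tiny gate: per-sub-space feature -> a blending logit.
        self.gate = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, backbone_features):
        # backbone_features: [num_rays, backbone_dim], e.g. volume-rendered
        # features from the underlying NeRF for each camera ray.
        f = self.subspace_feats(backbone_features)        # [R, K * feat_dim]
        f = f.view(f.shape[0], self.k, -1)                # [R, K, feat_dim]
        rgb_per_space = torch.sigmoid(self.decoder(f))    # [R, K, 3]
        weights = torch.softmax(self.gate(f), dim=1)      # [R, K, 1]
        # The final pixel is a weighted blend of the sub-space colors, which is
        # how a mirror's reflected content can live in its own sub-space.
        return (weights * rgb_per_space).sum(dim=1)       # [R, 3]
```

Because the extra parameters sit only in these small output heads, the backbone and the volume-rendering step are untouched, which is consistent with the paper's claim of minimal overhead on top of existing NeRF methods.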
To demonstrate the results, the authors compare MS-NeRF against existing methods on a new dataset of 25 synthetic scenes and 7 real captured scenes featuring complex reflections and refractions.
It's pretty astounding. The results show that MS-NeRF significantly outperforms existing single-space NeRF methods at rendering high-quality scenes with complex light paths through mirror-like objects.
MS-NeRF is poised to revolutionize the way we render complex scenes with reflective and refractive objects. With its multi-space approach, MS-NeRF could pave the way for more accurate and realistic rendering in the fields of computer vision, computer graphics, and beyond.
The paper contains several amazing examples, but it's worth remembering that the NeRF community has shown impressive mirror results before, such as SmallFly's mirror work. Nonetheless, it's great to see this level of clarity achieved without whatever SmallFly's secret method is! Now I'm excited to see what the community is able to achieve with this new paper and how it brings full memory capture one step closer.