Reflecting on NeRF-Casting

Michael Rubloff

May 23, 2024

Late last year we looked at UniSDF, which introduced dual radiance fields to better represent reflections in a scene. Recently, I came across NeRF-Casting on GitHub, and its fidelity immediately stood out to me.

The realism of the car bodies rendered by NeRF-Casting is significantly higher than that of previous methods like UniSDF and Zip-NeRF. I have quite a few car captures in my personal library and am excited to (eventually) be able to increase the fidelity of those captures.

NeRF-Casting introduces ray tracing more directly into the NeRF rendering pipeline, and the improvement is especially noticeable in the foreground of scenes.

NeRF-Casting begins with the scene representation, which is encoded using a multi-scale hash grid. This setup is similar to Zip-NeRF, but with some critical enhancements tailored to NeRF-Casting. Rendering proceeds in four main steps, each of which is unpacked below:

  1. Query volume density along each camera ray to compute the ray’s expected termination point and surface normal.

  2. Cast a reflected cone through the expected termination point in the reflection direction.

  3. Use a small MLP to combine the accumulated reflection feature with other sampled quantities (such as the diffuse color features and per-sample blending weights) to produce a color value for each sample along the ray.

  4. Alpha composite these colors and densities into the final pixel color.

NeRF-Casting uses a streamlined multi-layer perceptron (MLP). Unlike traditional NeRFs, which rely on larger networks, NeRF-Casting uses a smaller MLP to decode feature vectors into colors. The smaller network not only speeds up rendering but also keeps the representational heavy lifting in the feature grid, where lookups are cheap.
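
To make that concrete, here is a minimal JAX sketch of what a small decoder MLP looks like. The layer sizes and function names are my own illustration, not the paper's actual architecture:

```python
import jax
import jax.numpy as jnp

# Illustrative small decoder MLP: feature vector in, RGB out.
# Layer sizes are placeholders, not the paper's actual values.
def init_mlp(key, in_dim, hidden=64):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (in_dim, hidden)) * jnp.sqrt(2.0 / in_dim),
        "b1": jnp.zeros(hidden),
        "w2": jax.random.normal(k2, (hidden, 3)) * jnp.sqrt(2.0 / hidden),
        "b2": jnp.zeros(3),
    }

def decode_color(params, features):
    h = jax.nn.relu(features @ params["w1"] + params["b1"])
    return jax.nn.sigmoid(h @ params["w2"] + params["b2"])  # RGB in [0, 1]
```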

The journey of a camera ray in NeRF-Casting starts with ray marching. As a ray enters the scene, points are sampled along its path to determine the volume density at each one. This step is crucial, as it identifies where the ray intersects surfaces within the scene.
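
Here is a minimal JAX sketch of this step, assuming densities `sigmas` have already been queried at sample distances `ts` along the ray; the helper names are mine, not the paper's:

```python
import jax.numpy as jnp

# Volume-rendering weights from densities sampled at distances ts along a ray.
def ray_weights(sigmas, ts):
    deltas = jnp.diff(ts)                          # spacing between samples
    alphas = 1.0 - jnp.exp(-sigmas[:-1] * deltas)  # per-interval opacity
    trans = jnp.concatenate([jnp.ones(1), jnp.cumprod(1.0 - alphas)[:-1]])
    return trans * alphas                          # w_i = T_i * alpha_i

# Expected termination point: the weight-averaged distance along the ray.
def expected_termination(origin, direction, sigmas, ts):
    w = ray_weights(sigmas, ts)
    t_bar = jnp.sum(w * ts[:-1]) / jnp.maximum(jnp.sum(w), 1e-8)
    return origin + t_bar * direction
```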

Once a surface is identified, the next task is to estimate its normal, which is done by calculating the gradient of the volume density at the intersection point. Using this surface normal, NeRF-Casting calculates the reflection direction, an essential step for realistic reflections.
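
Both pieces reduce to a few lines of JAX. A sketch, assuming some differentiable density function `sigma_fn` (the toy sphere below is purely illustrative):

```python
import jax
import jax.numpy as jnp

def surface_normal(sigma_fn, x):
    g = jax.grad(sigma_fn)(x)                 # gradient of volume density at x
    return -g / (jnp.linalg.norm(g) + 1e-8)   # normal points against the gradient

def reflect(d, n):
    return d - 2.0 * jnp.dot(d, n) * n        # mirror the ray direction about n

# Toy example with a soft-sphere density, purely illustrative:
sigma_fn = lambda x: jnp.exp(-jnp.sum(x ** 2))
n = surface_normal(sigma_fn, jnp.array([0.0, 0.0, 1.0]))
refl = reflect(jnp.array([0.0, 0.0, -1.0]), n)  # points back along +z
```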

Rather than shooting a single reflection ray, NeRF-Casting casts a reflected cone. This cone represents a bundle of reflection rays emanating from the point where the initial ray hits the surface. By doing this, NeRF-Casting captures the full range of potential reflection paths, keeping reflections detailed and accurate.

Within the reflected cone, the system integrates features from the 3D feature grid, gathering the information it needs about the surrounding environment. These reflection features are then decoded by the smaller MLP into radiance (color) values. Because a single prefiltered cone stands in for many individual reflection rays, this step improves both speed and quality.
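
Here is a rough, non-authoritative sketch of how such an accumulation could look, reusing `ray_weights` from the ray-marching snippet and assuming a scale-aware lookup `feat_fn(x, radius)` as my stand-in for the multiscale hash grid query:

```python
import jax
import jax.numpy as jnp

# Approximate the reflected cone with samples along the reflection direction,
# using a lookup radius that grows with distance (the cone widening).
def accumulate_cone_features(feat_fn, sigma_fn, point, refl_dir, ts, spread=0.05):
    pts = point[None, :] + ts[:, None] * refl_dir[None, :]
    radii = spread * ts                       # wider footprint farther away
    feats = jax.vmap(feat_fn)(pts, radii)     # scale-aware grid lookups
    sigmas = jax.vmap(sigma_fn)(pts)
    w = ray_weights(sigmas, ts)               # from the ray-marching sketch above
    return jnp.sum(w[:, None] * feats[:-1], axis=0)  # one blended reflection feature
```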

The final step in NeRF-Casting is alpha compositing. For each sampled point along the camera ray, color and density values are integrated: the colors are weighted by the volume-rendering weights derived from their densities, producing a composited final color for each pixel. This is what lets the rendered image carry highly detailed and consistent reflections.
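
As a final sketch, again leaning on the `ray_weights` helper from earlier (the background handling here is my assumption, not the paper's):

```python
import jax.numpy as jnp

# Standard alpha compositing: weight per-sample colors by the
# volume-rendering weights and fall back to a background color.
def composite(colors, sigmas, ts, background=jnp.ones(3)):
    w = ray_weights(sigmas, ts)                      # defined in the earlier sketch
    rgb = jnp.sum(w[:, None] * colors[:-1], axis=0)  # weighted color along the ray
    acc = jnp.sum(w)                                 # total opacity of the ray
    return rgb + (1.0 - acc) * background
```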

While not perfect, NeRF-Casting represents a significant improvement in the accuracy of reflections in rendered scenes; the authors note that it still struggles with semi-transparent structures.

The paper is authored anonymously, but the code is written in JAX and builds on ideas from Zip-NeRF. It also employs sampling methods from Mip-NeRF and Ref-NeRF. These clues suggest that the paper might be from Google.

A code page is available on GitHub, but I doubt we will see the code itself published soon.

NeRF-Casting takes approximately 100 minutes to optimize and requires a substantial amount of GPU power (six V100s). This means it probably won't be part of the everyday consumer's workflow anytime soon, but there's potential for it to be included in CloudNeRF, assuming Google is behind this innovation.
