Research

Reflecting on NeRF-Casting

Michael Rubloff

May 23, 2024


Late last year we looked at UniSDF, which introduced dual radiance fields to better represent reflections in a scene. Recently, I came across NeRF-Casting on GitHub, and its fidelity immediately stood out to me.

The realism of the car bodies rendered by NeRF-Casting is significantly higher than that of previous methods such as UniSDF and Zip-NeRF. I have quite a few car captures in my personal library and am excited to (eventually) be able to increase their fidelity.

NeRF-Casting more directly introduces ray tracing into the NeRF rendering pipeline, and the improvement is especially noticeable in the foreground of scenes.

NeRF-Casting begins with the scene representation, which is encoded using a multi-scale hash grid. This setup is similar to Zip-NeRF, but with some critical enhancements. Rendering proceeds in four main steps:

  1. Query the volume density along each camera ray to compute the ray’s expected termination point and surface normal.

  2. Cast a reflected cone through the expected termination point in the reflection direction.

  3. Use a small MLP to combine the accumulated reflection feature with other sampled quantities (such as the diffuse color features and per-sample blending weights) to produce a color value for each sample along the ray.

  4. Alpha composite these samples and densities into the final color.

NeRF-Casting uses a streamlined multi-layer perceptron (MLP). Unlike traditional NeRFs, which rely on larger networks, it decodes feature vectors into colors with a much smaller MLP. This not only speeds up rendering but also keeps the network focused on the most relevant details.

The journey of a camera ray in NeRF-Casting starts with ray marching. As a ray enters the scene, it samples points along its path to determine the volume density at each point. This step is crucial as it identifies where the ray intersects surfaces within the scene.
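The expected termination point described above can be sketched with standard volume-rendering weights. This is a minimal stdlib-Python toy (the actual implementation is in JAX); the function names and the scalar density model are my own illustration, not the paper's API.

```python
import math

def ray_weights(densities, deltas):
    """Volume-rendering weights w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i is the transmittance accumulated before sample i."""
    weights, transmittance = [], 1.0
    for sigma, delta in zip(densities, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)
        weights.append(transmittance * alpha)
        transmittance *= 1.0 - alpha
    return weights

def expected_termination(ts, densities, deltas):
    """Expected depth along the ray: the weight-normalized sum of sample depths."""
    w = ray_weights(densities, deltas)
    total = sum(w)
    return sum(wi * ti for wi, ti in zip(w, ts)) / total if total > 0 else 0.0
```

With a single near-opaque sample at depth 2, the expected termination lands at that depth; in a real scene the weights spread over many samples and the expectation gives a soft surface depth.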

Once a surface is identified, the next task is to estimate its normal, which is done by calculating the gradient of the volume density at the intersection point. Using this surface normal, NeRF-Casting calculates the reflection direction—an essential step for realistic reflections.
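The normal and reflection-direction math here is standard and can be sketched as follows. This is a hedged toy: I use central finite differences on a callable density field for illustration, whereas a real implementation would take analytic gradients (e.g. via autodiff in JAX).

```python
import math

def normal_from_density(density, p, eps=1e-3):
    """Surface normal as the normalized negative gradient of volume density,
    estimated here with central finite differences."""
    grad = []
    for axis in range(3):
        hi = list(p); hi[axis] += eps
        lo = list(p); lo[axis] -= eps
        grad.append((density(hi) - density(lo)) / (2.0 * eps))
    norm = math.sqrt(sum(g * g for g in grad)) or 1.0
    return [-g / norm for g in grad]

def reflect(d, n):
    """Reflect view direction d about unit normal n: r = d - 2 (d . n) n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return [di - 2.0 * dot * ni for di, ni in zip(d, n)]
```

For a view ray pointing straight down at an upward-facing surface, `reflect` returns the ray pointing straight back up, as expected for a mirror bounce.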

NeRF-Casting takes a different approach by casting a reflected cone. This cone represents a bundle of reflection rays emanating from the point where the initial ray hits a surface. By doing this, NeRF-Casting captures a comprehensive range of potential reflection paths, ensuring detailed and accurate reflections.

Within the reflected cone, the system integrates features from the 3D feature grid, gathering all necessary information about the environment. These reflection features are then decoded by the smaller MLP into radiance (color) values. This efficient processing of fewer but more relevant data points enhances both speed and quality.
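The accumulate-then-decode step might look like the sketch below. Both functions are stand-ins of my own: the weighted average approximates the paper's cone-based feature integration over the hash grid, and the one-layer "decoder" only gestures at the small MLP, which in reality has learned weights and more structure.

```python
import math

def accumulate_reflection_feature(features, weights):
    """Weighted average of feature vectors sampled inside the reflected cone
    (a crude stand-in for the paper's cone-based feature integration)."""
    total = sum(weights) or 1.0
    dim = len(features[0])
    return [sum(w * f[k] for w, f in zip(weights, features)) / total
            for k in range(dim)]

def decode_color(feature, W, b):
    """Toy stand-in for the small decoder MLP: one linear layer + sigmoid -> RGB."""
    return [1.0 / (1.0 + math.exp(-(sum(wi * fi for wi, fi in zip(row, feature)) + bk)))
            for row, bk in zip(W, b)]
```

In the real pipeline the decoded reflection color is further blended with diffuse color features using per-sample blending weights, as described in step 3 above.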

The final step in NeRF-Casting is alpha compositing. For each sampled point along the camera ray, color and density values are integrated. The colors along the ray are weighted by their densities, resulting in a composited final color for each pixel. This method ensures that the rendered image includes highly detailed and consistent reflections.
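The compositing step is the standard NeRF rendering integral and can be sketched directly (again a stdlib-Python toy, not the paper's JAX code):

```python
import math

def composite(colors, densities, deltas):
    """Alpha-composite per-sample colors along a ray into a single pixel color.
    Each sample contributes transmittance * alpha of its color."""
    pixel, transmittance = [0.0, 0.0, 0.0], 1.0
    for color, sigma, delta in zip(colors, densities, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)
        w = transmittance * alpha
        pixel = [p + w * c for p, c in zip(pixel, color)]
        transmittance *= 1.0 - alpha
    return pixel
```

A single near-opaque red sample composites to (almost) pure red, while low densities let the background transmittance through.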

While not perfect, NeRF-Casting represents a significant improvement in the accuracy of reflections in rendered scenes. The authors note that NeRF-Casting struggles with semi-transparent structures.

The paper is authored anonymously, but the code is written in JAX and is built on ideas from Zip-NeRF. It also employs data sampling methods from Mip-NeRF and Ref-NeRF. These clues suggest that the paper might be from Google.

There is a code page available on GitHub, but I doubt we will see the code published soon.

NeRF-Casting takes approximately 100 minutes to optimize and requires a substantial amount of GPU power (six V100s). This means it probably won't be part of the everyday consumer's workflow anytime soon, but there's potential for it to be included in CloudNeRF, assuming Google is behind this innovation.
