Guest Article

Understanding NeRF on a small scale

Ibrahim Farhat

May 6, 2024


Introduction:

Neural Radiance Field (NeRF) research has attracted significant attention recently, with 3D modelling, virtual/augmented reality, and visual effects driving its application. While current NeRF implementations can produce high-quality visual results, the first NeRF model, proposed by Mildenhall et al. in their paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, is the cornerstone that triggered the mass of research in this field. Understanding the basic concepts of that work is crucial for understanding the novelties that followed. In this article, the reader will gain a practical explanation of NeRF on a small scale, covering most of the concepts of the original paper.

Basic concepts:

Before diving into the explanation of NeRF, I would like you to understand two basic concepts in the volumetric world: scene representation and volumetric rendering.

Scene representation: In volumetric graphics, scene representations can be categorized into two main types: explicit and implicit representations. Each type has a distinct way of describing the geometry and volume of a scene.

Explicit scene representation: Explicit representations directly describe the geometry and surface of objects using discrete elements. These elements are usually defined in a way that explicitly outlines the characteristics of the objects within the scene. Point clouds and voxel grids are the most common explicit representations of a 3D scene, as shown in Fig.1 and in the short code sketch after the figure.

  • Point Clouds: These are collections of points in space, where each point represents a part of the surface or volume, explicitly defining the structure through discrete locations.

  • Voxel Grids: Regular grids and sparse voxel octrees directly represent volume by dividing space into discrete, small volumetric elements (voxels).


    Fig 1: Explicit representation: Point Cloud and Mesh
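To make the distinction concrete, here is a minimal, hypothetical sketch (not taken from any NeRF codebase) showing that an explicit scene is literally stored data:

import torch

# A point cloud: a list of discrete 3D locations, optionally with a color per point.
points = torch.rand(1000, 3)           # 1,000 (x, y, z) positions
point_colors = torch.rand(1000, 3)     # matching RGB color stored for each point

# A voxel grid: space divided into a regular 128^3 lattice of small cells,
# each cell explicitly storing a value such as occupancy or density.
voxels = torch.zeros(128, 128, 128)
voxels[60:70, 60:70, 60:70] = 1.0      # mark a small cube of cells as occupied

Rendering an explicit scene therefore amounts to looking up the stored elements.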

Implicit scene representation: Implicit representations describe a scene through functions or fields that define where the surface of an object exists, based on the evaluation of these functions at any point in space.

Signed Distance Functions (SDFs) describe the surface implicitly through a function that gives the shortest distance from any point in space to the nearest surface, with the surface itself located where the distance is zero.

Function-based representations: functions are used to define scene properties implicitly. An example is a function that returns the color of every point in 3D space.
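As a concrete contrast, here is a hypothetical sketch of an implicit representation: a signed distance function for a sphere. Nothing about the surface is stored; the shape exists only through evaluating the function.

import torch

# Hypothetical implicit representation: a signed distance function for a unit sphere.
# Negative inside the sphere, positive outside, and exactly zero on the surface.
def sphere_sdf(points, center=torch.zeros(3), radius=1.0):
    return torch.linalg.norm(points - center, dim=-1) - radius

query = torch.tensor([[0.0, 0.0, 1.0],     # lies on the surface
                      [0.0, 0.0, 2.0]])    # one unit outside the surface
print(sphere_sdf(query))                   # tensor([0., 1.])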

Volumetric Rendering: Ray marching and Alpha-blending

Fig.3: Volumetric rendering

Ray marching: Ray marching is a technique used to render 3D scenes by progressively stepping along a ray and sampling the volume data through which it passes, as can be seen in the figure above. This method is suited to volumetric rendering with both explicit and implicit volumetric representations.

In ray marching, rays are cast from the camera (or eye position) into the scene for each pixel on the view plane. As a ray advances into the volume, it samples data at predefined intervals along its path. Here are the key steps involved in ray marching (a short code sketch follows the list):

  1. Initialization: A ray is cast from the camera through a pixel into the scene.

  2. Stepping: The ray advances in steps through the volume. At each step, the volume data is sampled.

  3. Sample Evaluation: At each step, the sampled value is used to determine properties such as color, density, and opacity at that point in the volume. This is where the distinction between explicit and implicit representations comes into play:

    1. For explicit volumes (like point clouds), the sampled value is often retrieved directly from the data stored in the point cloud.

    2. For implicit volumes (like SDFs), the value is computed using the implicit function that defines the volume.

  4. Accumulation: Properties from each sampled point are accumulated to compute the final color and opacity of the ray, which will correspond to the pixel color on the view plane.

  5. Termination: The ray marching continues until the ray exits the volume or the accumulated opacity reaches a threshold, indicating full opacity (no further contributions are visible).
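These five steps can be sketched in a few lines of code. This is an illustrative sketch only, not the code from this article's repository; volume_fn is a hypothetical callable that returns a color and a density for any 3D point.

import torch

# Minimal ray-marching sketch: step along one ray, sample the volume, accumulate color.
def march_ray(ray_o, ray_d, volume_fn, near=2.0, far=6.0, n_steps=64):
    t_vals = torch.linspace(near, far, n_steps)        # 1. predefined intervals along the ray
    step = (far - near) / n_steps
    color = torch.zeros(3)
    transmittance = 1.0                                 # how much light still reaches the camera
    for t in t_vals:                                    # 2. step through the volume
        p = ray_o + t * ray_d                           #    current sample position
        rgb, sigma = volume_fn(p)                       # 3. sample color/density (data lookup or function query)
        alpha = 1.0 - torch.exp(-sigma * step)          #    opacity contributed by this step
        color = color + transmittance * alpha * rgb     # 4. accumulate color
        transmittance = transmittance * (1.0 - alpha)
        if transmittance < 1e-3:                        # 5. terminate once the ray is effectively opaque
            break
    return color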

Alpha Blending: Alpha blending is used in conjunction with ray marching to accumulate the color contributions from samples along the ray. It simulates the absorption and scattering of light as it travels through the volume. Each sample point contributes a certain color and a certain amount of opacity (alpha value), which affects the visibility of subsequent samples. The typical compositing formula used in alpha blending is:

C_out = α · C_src + (1 - α) · C_dst

Where:

  • C_src is the color of the source sample.

  • α is the opacity of the source sample.

  • C_dst is the current accumulated color along the ray.

  • C_out is the new accumulated color after blending the source sample.

As the ray marches through the volume, alpha blending is performed iteratively:

  1. A sample’s color and alpha are determined based on the volume data.

  2. This color is blended with the accumulated color from previous samples using the alpha blending formula.

  3. The new accumulated color becomes C_dst​ for the next sample along the ray.

This process is akin to layering semi-transparent paints, where each layer can obscure the layers behind it to varying degrees.
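The three blending steps above translate almost line for line into code. This is a hypothetical sketch assuming the samples are ordered back to front (farthest first), which is the order in which the formula above composites correctly:

import torch

# Literal transcription of the iterative blending described above.
def composite_samples(sample_rgbs, sample_alphas):
    """sample_rgbs: (N, 3) colors, sample_alphas: (N,) opacities in [0, 1], back-to-front order."""
    c_dst = torch.zeros(3)                         # accumulated color starts empty
    for c_src, alpha in zip(sample_rgbs, sample_alphas):
        c_dst = alpha * c_src + (1.0 - alpha) * c_dst   # C_out = α·C_src + (1 - α)·C_dst
    return c_dst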

Impact on Explicit vs Implicit Representations

Explicit Volumes: Ray marching with alpha blending can directly access the discrete data points, making the computation straightforward but potentially memory-intensive due to the dense storage of points (for point clouds).

Implicit Volumes: Here, alpha blending is combined with function evaluation: for every sampled point in space, the function must be queried to obtain the color and density information, which increases computational complexity.

What is a NeRF?

A neural radiance field (NeRF) is a fully-connected neural network that creates an implicit representation of a complex 3D scene from a partial set of 2D images, as can be seen in Fig.4. It learns to predict, for every point in 3D space:

  • View-dependent color: red, green, and blue (RGB) values

  • Density: representing the transparency of the point

This means that a separate neural network is trained for every scene.

Fig.4: Training a NeRF model.

The neural network model utilized in this NeRF implementation is a fully-connected dense network. It processes five inputs per point in 3D space: three for the spatial coordinates (x, y, z) and two for the viewing direction. The network produces four outputs for each point: three corresponding to the RGB (red, green, blue) color channels and one for the density of the point, as can be seen in the figure below.

Fig.5: Neural Network input output
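The input/output structure in Fig.5 can be sketched as a plain function from five inputs to four outputs. This is a conceptual sketch only, not the repository's model; the tiny implementation in this article actually feeds positionally encoded coordinates instead (see Steps #3 and #4 below).

import torch
import torch.nn as nn

# Conceptual sketch: a NeRF is a function F(x, y, z, theta, phi) -> (R, G, B, sigma).
nerf_sketch = nn.Sequential(
    nn.Linear(5, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 4),                 # 3 color channels + 1 density
)

point_and_view = torch.tensor([[0.1, -0.4, 0.7, 1.2, 0.3]])  # (x, y, z, theta, phi)
rgb_sigma = nerf_sketch(point_and_view)                      # shape (1, 4)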

Fig.6: summary of the training process.

The figure above illustrates the process involved in Neural Radiance Fields (NeRF) for reconstructing 3D scenes. The process begins with data collection, where you gather the necessary input images of the scene from various angles. Each image is then associated with its corresponding viewing direction information. Next, this combined data is input into two distinct types of Neural Networks utilized in NeRF:

  1. Coarse Neural Network: This network initially processes the data to create a rough approximation of the 3D scene. It helps in establishing a baseline geometry and volume density from the input images and viewing directions.

  2. Fine Neural Network: Following the coarse estimation, the fine neural network refines these preliminary outputs. It enhances the details and accuracy of the scene reconstruction, producing higher resolution and more precise radiance fields.

Finally, after several iterations, the process generates an implicit representation of the 3D scene. This allows for the rendering of novel views, enabling observers to visualize the scene from perspectives not originally captured in the input images.
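The coarse-then-fine scheme relies on hierarchical sampling: the coarse network's blending weights along each ray are treated as a probability distribution, and additional depths are drawn where those weights are large, so the fine network concentrates its samples near surfaces. Below is a rough, hypothetical sketch (the function name and shapes are assumptions); the tiny implementation discussed next omits this step entirely.

import torch

# Rough sketch of hierarchical sampling (omitted in the tiny implementation).
# z_vals_coarse: (..., N) coarse sample depths per ray; weights: (..., N) coarse blending weights.
def sample_fine_depths(z_vals_coarse, weights, n_fine=128):
    pdf = weights / (weights.sum(dim=-1, keepdim=True) + 1e-8)  # normalize weights into a PDF
    cdf = torch.cumsum(pdf, dim=-1)                             # cumulative distribution along the ray
    u = torch.rand(*cdf.shape[:-1], n_fine)                     # uniform random numbers in [0, 1)
    idx = torch.searchsorted(cdf, u)                            # inverse-transform sampling
    idx = idx.clamp(max=z_vals_coarse.shape[-1] - 1)
    return torch.gather(z_vals_coarse, -1, idx)                 # depths concentrated near surfaces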

PyTorch Implementation for Tiny NeRF:

The tiny-NeRF PyTorch implementation is hosted on GitHub; you can access the code at this GitHub repo. I recommend setting up the environment and training the model on the Lego scene example (provided with the code) before finishing the article!

This Tiny-NeRF implementation draws inspiration from the tiny NeRF model mentioned in the original NeRF paper and serves as a simplified version that maintains the same architectural framework. The primary differences are the following: this version eliminates the hierarchical sampling technique and employs a single neural network, rather than the coarse and fine networks used in the original setup.

Understanding the data flow:

Fig.7: Data flow for one training iteration

Step #1:

In the initial phase, the software creates various projection planes based on the camera position data provided as input. For each camera viewing direction, we generate 64 sample planes, each with dimensions of 100x100 pixels. This results in a total of 64x100x100 sample positions, each with x, y, and z coordinates. These planes represent the actual world positions as seen from the input viewing angle. The focal length of the camera, which influences the depth and perspective of the rays, is used to compute accurate x, y, z positions for every pixel.

# Imports used by the code snippets in this article
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import init

pts_flat, z_vals = torch_get_rays_sample_space(H, W, focal, pose, near, far, N_samples, rand=True)  # sampling 3D space

## torch_get_rays_sample_space definition
def torch_get_rays_sample_space(H, W, focal, c2w, near, far, N_samples, rand=False):
    if isinstance(c2w, np.ndarray):
        c2w = torch.from_numpy(c2w).float()
    i, j = torch.meshgrid(torch.arange(W, dtype=torch.float32),
                          torch.arange(H, dtype=torch.float32), indexing='xy')
    # Per-pixel ray directions in camera space, rotated into world space by the pose
    dirs = torch.stack([(i - W * 0.5) / focal, -(j - H * 0.5) / focal, -torch.ones_like(i)], dim=-1)
    rays_d = torch.sum(dirs[..., None, :] * c2w[:3, :3], dim=-1)
    rays_o = c2w[:3, -1].expand(rays_d.shape)
    # N_samples depths between near and far, optionally jittered for stochastic sampling
    z_vals = torch.linspace(near, far, N_samples)
    z_vals = z_vals.expand(rays_o.shape[0], rays_o.shape[1], N_samples)
    z_vals = z_vals.clone()
    if rand:
        z_vals += torch.rand(rays_o.shape[0], rays_o.shape[1], N_samples) * (far - near) / N_samples
    # 3D sample positions along every ray, flattened to (H*W*N_samples, 3)
    pts = rays_o[..., None, :] + rays_d[..., None, :] * z_vals[..., :, None]
    pts_flat = pts.view(-1, 3)
    return pts_flat, z_vals
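As a quick illustration of the shapes involved, the function can be called with the settings described above (H = W = 100, N_samples = 64). The pose and focal length below are placeholders, not the dataset's actual values.

# Illustrative shape check with placeholder camera parameters.
dummy_pose = torch.eye(4)      # hypothetical camera-to-world matrix
dummy_focal = 138.0            # placeholder focal length
pts_flat, z_vals = torch_get_rays_sample_space(100, 100, dummy_focal, dummy_pose,
                                               near=2.0, far=6.0, N_samples=64, rand=True)
print(pts_flat.shape)          # torch.Size([640000, 3]): 100 * 100 * 64 sampled 3D positions
print(z_vals.shape)            # torch.Size([100, 100, 64]): sample depths for every pixel ray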

Step #2:

In the second step, the software flattens the sampled positions for each coordinate, x, y, and z. This results in three vectors, each containing 640,000 elements (equivalently, a single array of 640,000 points with three coordinates each).

pts_flat = pts.view(-1, 3)

Step #3:

In the third step, the position vectors are fed into the positional encoding process, which lifts the position information into a higher-dimensional space. Using the function below, this process introduces 36 new dimensions for the x, y, and z coordinates. These dimensions are derived by applying sine and cosine functions to the x, y, and z values at increasing frequencies. This technique enables the model to capture high-frequency details in the final 3D model, enhancing its spatial resolution and detail.

pts_flat_enc = posenc(pts_flat, pos_enc_l)

## posenc definition
def posenc(x, L_embed):
    rets = [x]
    for i in range(L_embed):
        for fn in [torch.sin, torch.cos]:
            rets.append(fn(2.**i * x))  # sin/cos at frequency 2^i
    return torch.cat(rets, dim=-1)
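As a quick sanity check on the dimensions (illustrative only, using random points and pos_enc_l = 6 as set in this implementation): each 3D position maps to 3 + 3 · 2 · 6 = 39 values, i.e., 36 new dimensions.

# Dimension check with random inputs, assuming pos_enc_l = 6.
dummy_pts = torch.rand(640000, 3)          # flattened (x, y, z) sample positions
dummy_enc = posenc(dummy_pts, 6)
print(dummy_enc.shape)                     # torch.Size([640000, 39]): 3 original + 36 encoded dimensions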

Step #4:

In the fourth step, the inputs are fed into the NeRF model, defined in the code block below, for training. In this implementation, the NeRF is a dense neural network with 8 layers of width 256 each. For every position, the model outputs four values corresponding to the red (R), green (G), and blue (B) color channels and the density of that specific point in space.

model.train()
optimizer.zero_grad()
raw = model(pts_flat_enc)

## model definition
class MyModel(nn.Module):
    def __init__(self, widths, L_embed=6, use_dropout=False, use_batch_norm=False):
        super(MyModel, self).__init__()
        if L_embed is None:
            raise ValueError("L_embed must be provided")
        input_dim = 3 + 3 * 2 * L_embed  # Calculate input dimension based on embedding length
        self.layers = nn.ModuleList()
        self.use_dropout = use_dropout
        self.use_batch_norm = use_batch_norm
        if self.use_batch_norm:
            self.norms = nn.ModuleList()
        if self.use_dropout:
            self.dropouts = nn.ModuleList()
        previous_width = input_dim
        for width in widths:
            layer = nn.Linear(previous_width, width)
            init.xavier_uniform_(layer.weight)
            init.zeros_(layer.bias)
            self.layers.append(layer)
            if self.use_batch_norm:
                self.norms.append(nn.BatchNorm1d(width))
            if self.use_dropout:
                self.dropouts.append(nn.Dropout(0.1))
            previous_width = width  # Update the input dimension for the next layer
        # The output layer takes the last width in the list as its input size
        self.output_layer = nn.Linear(widths[-1], 4)
        init.xavier_uniform_(self.output_layer.weight)
        init.zeros_(self.output_layer.bias)

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if self.use_batch_norm and i < len(self.norms):  # Check added to avoid out-of-index errors
                x = self.norms[i](x)
            x = F.relu(x)
            if self.use_dropout and i < len(self.dropouts):  # Check added to avoid out-of-index errors
                x = self.dropouts[i](x)
        x = self.output_layer(x)
        return x

Step #5:

The outputs from the NeRF model are organized into four vectors: three for the RGB color channels and one for density. These outputs are reshaped to represent the sampled 3D space, with 64 samples per pixel over a 100x100 frame. This results in a four-dimensional array with dimensions 100x100x64x4 (height x width x samples x channels), representing the model's predictions for every sampled point (x, y, z) in the 3D space from a specific viewing direction.

raw = raw.view(H, W, N_samples, 4)   # 4D array
sigma_a = F.relu(raw[..., 3])        # density
rgb = torch.sigmoid(raw[..., :3])    # color

Step #6:

In this step, rendering is performed using ray marching combined with alpha-blending techniques. The sigma_a and rgb outputs of the NeRF, together with the sample depths stored in z_vals, are used to render a 2D frame for the input view. In addition, the function outputs a depth map and an accumulation map.

rgb_render, depth_map, acc_map = render_rays(sigma_a, rgb, z_vals)

## function definition
def render_rays(sigma_a, rgb, z_vals):
    # Distances between consecutive z values (last interval set to a very large number)
    dists = torch.cat([z_vals[..., 1:] - z_vals[..., :-1],
                       torch.full(z_vals[..., :1].shape, 1e10, device=z_vals.device)], -1)
    alpha = 1.0 - torch.exp(-sigma_a * dists)                        # per-sample opacity
    padded_alpha = torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1)
    weights = alpha * torch.cumprod(padded_alpha, dim=-1)[..., 1:]   # alpha-blending weights
    rgb_map = torch.sum(weights[..., None] * rgb, dim=-2)            # rendered color per pixel
    depth_map = torch.sum(weights * z_vals, dim=-1)                  # expected depth per pixel
    acc_map = torch.sum(weights, dim=-1)                             # accumulated opacity per pixel
    return rgb_map, depth_map, acc_map

Step #7:

The final step involves computing the Mean Squared Error (MSE) loss between the rendered frame and the target image; gradient descent is then employed to update the weights of the model, as can be seen in the code below.

loss = nn.MSELoss()

### Train and render
train_loss = loss(rgb_render, target)
train_loss.backward()
optimizer.step()

From step #1 to step #7, one complete training iteration is conducted. To achieve a satisfactory scene representation, this tiny-NeRF implementation typically requires at least 1,000 iterations. It is important to note that the original NeRF model requires up to 100,000 iterations to achieve a detailed and accurate scene representation. This highlights the computational intensity and scale of training required for NeRF models, which is one of the main challenges of NeRFs.
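To see how the seven steps fit together, here is a rough sketch of the full training loop, chaining the snippets above. It assumes the dataset (images, poses, focal) and the hyperparameters (H, W, near, far, N_samples, pos_enc_l) as well as model and optimizer are already set up as in the repository; the loop structure and names here are illustrative, not the repository's exact code.

# Rough sketch of the training loop, chaining Steps #1 to #7 (assumed setup as described above).
for iteration in range(1000):                                    # ~1,000 iterations for tiny-NeRF
    idx = np.random.randint(images.shape[0])                     # pick a random training view
    target = torch.from_numpy(images[idx]).float()
    pose = poses[idx]

    pts_flat, z_vals = torch_get_rays_sample_space(H, W, focal, pose,
                                                   near, far, N_samples, rand=True)  # Steps #1-#2
    pts_flat_enc = posenc(pts_flat, pos_enc_l)                    # Step #3

    model.train()
    optimizer.zero_grad()
    raw = model(pts_flat_enc).view(H, W, N_samples, 4)            # Steps #4-#5
    sigma_a = F.relu(raw[..., 3])
    rgb = torch.sigmoid(raw[..., :3])

    rgb_render, depth_map, acc_map = render_rays(sigma_a, rgb, z_vals)  # Step #6
    train_loss = nn.MSELoss()(rgb_render, target)                 # Step #7
    train_loss.backward()
    optimizer.step()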

Conclusion & Future works:

Neural Radiance Fields (NeRF) represent a significant advancement in volumetric rendering, offering solutions to traditional challenges such as large 3D model sizes and capturing view-dependent color variations. However, NeRF introduces its own set of challenges, primarily due to its reliance on intensive computational resources. Training a NeRF model is a resource-intensive process, requiring high-performance GPUs and substantial time, which can span several hours to days depending on scene complexity. This tiny NeRF implementation in PyTorch provides a practical insight into how a NeRF model is trained and offers a foundational understanding of NeRF on a smaller scale. However, it does not encompass all features of the original NeRF framework. Many new models have emerged since NeRF, each addressing different challenges; one such model is 3D Gaussian Splatting (3DGS), which excels in both rendering quality and speed. Stay tuned for another article on implementing 3DGS on a small scale, which will further explore this high-performing model.


Written by Ibrahim Farhat

Ibrahim Farhat is a Researcher at the Digital Sciences Research Center at the Technology Innovation Institute (TII), Abu Dhabi.
