Guide

What are the NeRF Metrics?

Michael Rubloff

Jun 12, 2023


I've become more accustomed to seeing repeating words and phrases within NeRF papers. Some of the most common are PSNR, SSIM, and LPIPS. All of them are used to evaluate the quality of each NeRF method within a given scene.

However, all three have been in existence well before NeRFs, so today I figured I would give a quick article on what the three terms are and what they measure, as well as highlight a new potential way to measure NeRF quality from a group of researchers at the University of Bristol.

While NeRF implementations can produce high-quality visual results, evaluating their performance accurately remains a challenge. Conventional evaluation methods provide approximate indicators and may not capture specific aspects of NeRFs. To address this issue, a group of researchers has proposed a new test framework that isolates the neural rendering network from the NeRF pipeline for more reliable evaluation.

What is PSNR (Peak Signal to Noise Ratio)?

Peak Signal-to-Noise Ratio (PSNR) is a metric widely used in image and video processing to measure the quality of reconstructed (i.e., compressed and then decompressed) images or videos. It evaluates error on a per-pixel color basis, which overlaps somewhat with NeRFs' view-dependent colors. The higher the PSNR, the better the quality of the compressed or reconstructed image, video, or NeRF.
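As a rough sketch, PSNR is just the mean squared error between a rendered image and its ground truth, put on a logarithmic decibel scale. Here's a minimal pure-Python version for illustration (real pipelines would use NumPy or a library implementation, and operate per color channel):

```python
import math

def psnr(reference, rendered, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two same-sized images.

    Images are given as flat lists of pixel values (e.g. 0-255 grayscale).
    Higher is better; identical images give infinity.
    """
    assert len(reference) == len(rendered), "images must be the same size"
    mse = sum((r - p) ** 2 for r, p in zip(reference, rendered)) / len(reference)
    if mse == 0:
        return float("inf")  # no error at all
    return 10.0 * math.log10(max_val ** 2 / mse)

# A rendering that is off by 2 gray levels at every pixel:
reference = [100, 120, 140, 160]
rendered = [102, 122, 142, 162]
print(round(psnr(reference, rendered), 2))  # ~42.11 dB
```

For context, most published NeRF results on standard benchmarks land somewhere in the 20-40 dB range, which is why a one or two dB gain between methods is considered meaningful.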

What is SSIM (Structural Similarity Index Measure)?

SSIM is used for measuring the similarity between two images. The SSIM index is a full reference metric; in other words, the measurement or prediction of image quality is based on an initial uncompressed or distortion-free image as reference.

SSIM considers changes in structural information, perceived luminance, and contrast that can occur when images are subjected to various types of distortion. It aims to reflect the human visual system's perception more closely than simpler metrics like PSNR or mean squared error (MSE).

As SSIM computations are performed on image patches, they allow for some misalignment between the synthesized and reference images. This is helpful for evaluation as there may be variations between the NeRF camera model and the real camera used for capturing the training images.
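To make the idea concrete, here is a minimal single-window SSIM sketch in pure Python. The standard metric computes this over sliding local patches and averages the results; the constants follow the usual convention (K1 = 0.01, K2 = 0.03, 8-bit dynamic range), but this global version is for illustration only:

```python
def ssim_global(x, y, max_val=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM over two flat lists of pixel values.

    Combines luminance (means), contrast (variances), and structure
    (covariance) terms; full SSIM averages this over local windows.
    """
    n = len(x)
    c1, c2 = (k1 * max_val) ** 2, (k2 * max_val) ** 2  # stabilizing constants
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

img = [100, 120, 140, 160]
print(ssim_global(img, img))                           # identical images: 1.0
print(ssim_global(img, [v + 30 for v in img]) < 1.0)   # brightness shift: True
```

Note how a uniform brightness shift lowers the score even though the structure is untouched, because the luminance term penalizes the mean difference; PSNR would likewise drop, but SSIM caps out at 1.0 for a perfect match.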

What is LPIPS (Learned Perceptual Image Patch Similarity)?

Finally, Learned Perceptual Image Patch Similarity (LPIPS) is a perceptual metric that quantifies the human-perceived similarity between two images, and it has gained popularity in areas such as frame interpolation. Unlike traditional metrics such as PSNR and SSIM, which calculate differences based on raw pixel values or simple transformations thereof, LPIPS leverages deep learning to better align with human visual perception: it uses the distance between features extracted by a convolutional neural network (CNN) pretrained on an image classification task as its measure of similarity. Lower LPIPS scores therefore indicate greater perceptual similarity.

What is Whole-scene Average Prediction Error (WAPE)?

With the three above metrics defined, a group of researchers at the University of Bristol advocates for a new, NeRF-specific metric: Whole-scene Average Prediction Error (WAPE).

The research paper introduces a configurable approach for generating representations specifically for evaluation purposes. This approach utilizes ray-casting to transform mesh models into explicit NeRF samples and "shade" these representations. By combining these methods, the researchers demonstrate how different scenes and types of networks can be evaluated within this framework. They also propose a novel metric called the Whole-scene Average Prediction Error (WAPE) to measure task complexity, considering visual parameters and the distribution of spatial data.

This framework first isolates the neural rendering network from the NeRF pipeline and then performs a parametric evaluation by training and evaluating the NeRF on an explicit radiance field representation.

By isolating the rendering network from the entire pipeline, the framework gives more tailored feedback on underlying errors within the reconstruction and offers a deeper understanding of how well each method works. For instance,

"as image-based metrics only evaluate the prediction quality via a 2-D projection, this can still result in loss of information about the accuracy of spatial samples in relation to their distribution in volumetric space."

The proposed framework addresses the limitations of image-based metrics by providing a parametric evaluation that compares the learnable outputs of INR rendering networks against ground truth data. By generating synthetic radiance fields from mesh-based representations and applying ray tracing, the researchers accurately represent ground truths and enhance the quality of visual features.

In theory, the more challenging the data is to reconstruct into a high-quality NeRF, the more the grading should compensate. With that in mind, they introduce my favorite part of the paper: a new metric for evaluating task complexity.

This takes into account the number of input samples, the relative distribution of novel views and training views, and the functional complexity of the chosen ray tracing algorithm(s). The NeRFs that I have processed thus far have all been different in size and complexity, and I believe this should be reflected in evaluation. It's also interesting that they interpret complexity as stemming not only from the input views, but also from the underlying positional distribution of ray samples.

Continuing an emerging trend, WAPE's code is built to work with Nerfstudio, though it doesn't appear to have been published yet.

Only time will tell if the NeRF community will embrace WAPE and begin weighing quality markers with it. It has been fascinating to weigh each method's PSNR, SSIM, and LPIPS scores, but if there's a way to get more accurate metrics and identify areas of NeRF specific improvement more efficiently, I'm all for it.
