Platforms

Snap NeRF Arrives: MobileR2L

Michael Rubloff


Sep 14, 2023


It feels like it was just yesterday that I was talking about large tech companies utilizing NeRFs. That's probably because it was. Well, another day, another multi-billion-dollar tech company enters the NeRF fray. This time it's the Santa Monica-based camera company, Snap.

The feature is based on Snap's CVPR 2023 paper, Real-Time Neural Light Field on Mobile Devices (MobileR2L), which was first published in December of last year.

Like almost all NeRF platforms, Snap has written out capture suggestions in one form or another:

• Inconsistent lighting: the NeRF model is sensitive to lighting. Do your best to keep the lighting uniform across all images.
• Non-uniform radius: the best practice is to make sure your camera always lies on the "Meridian."
• Missing views: make sure to take overlapping images when walking around the object.
• Blurry images: try to hold your phone steady when taking pictures.

You might be wondering how the Snap team is able to achieve this, as there's been great speculation about how NeRFs will be able to run in real time.

Their method uses teacher-student training, also known as knowledge distillation. In this iteration, NVIDIA's Instant-NGP serves as the teacher. First, a larger, more powerful model (the "teacher") is trained. Then, a smaller model (the "student") is trained to mimic the teacher's outputs. The end goal is a smaller model that performs nearly as well as the larger one but is far more computationally efficient. This can be broken out into a few steps.

  1. Training the Teacher Model

    • In this step, Instant-NGP is trained on the NeRF dataset. Once the teacher model is trained, it holds the knowledge of the dataset, i.e., it has learned the patterns, features, and other nuances of the data.

  2. Generate Pseudo Data with the Teacher Model:

    • Once the teacher is trained, it's used to generate "pseudo data". This step might sound redundant, but there's a method to the madness. Here's why this is valuable:

      • Data Augmentation: Pseudo data can be thought of as an advanced form of data augmentation. Instead of just rotating, cropping, or applying color adjustments (traditional data augmentation techniques), the teacher model is creating entirely new samples based on what it has learned. This can enrich the dataset, potentially making the subsequent training process more robust.

      • Knowledge Transfer: By generating pseudo data, the teacher model is indirectly transferring its learned knowledge. When the student model is trained on this data, it's like the student is getting hints or lessons directly from the teacher.

      • Tailored Data for Mobile Models: The generated pseudo data may contain features or patterns that are especially useful for training the smaller, mobile-friendly student model. It bridges the gap between the high-capacity teacher model and the lightweight student model, providing the latter with data that is tailored to its needs.
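To make the pseudo-data step concrete, here is a toy NumPy sketch. Everything in it is a stand-in: `teacher_render` is a fixed function of ray direction, whereas in MobileR2L the teacher is a trained Instant-NGP rendering full images at sampled camera poses. The point is only the data flow: new inputs are sampled, and the teacher's outputs become the labels.

```python
# Sketch: a trained teacher synthesizes pseudo training pairs at inputs the
# real capture never visited. teacher_render is a hypothetical stand-in for
# Instant-NGP rendering images at sampled camera poses.
import numpy as np

rng = np.random.default_rng(1)

def teacher_render(directions):
    # Stand-in renderer: unit ray direction -> pseudo RGB in [0, 1].
    return (directions + 1.0) / 2.0

# Sample random ray directions on the unit sphere (hypothetical pose sampling).
d = rng.normal(size=(10_000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)

pseudo_rgb = teacher_render(d)                            # teacher outputs become labels
pseudo_dataset = np.concatenate([d, pseudo_rgb], axis=1)  # (input, label) pairs
print(pseudo_dataset.shape)                               # → (10000, 6)
```

Because the teacher can be queried at arbitrary inputs, the pseudo dataset can be made much larger and denser than the original capture.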

  3. Training the MobileR2L (Student Model)

    • Using Pseudo Data and Original Data:

      • When training the student model (in this case, MobileR2L), it will likely benefit from both the original NeRF dataset and the pseudo data generated by the teacher model.

      • Training on this combined dataset allows the student model to leverage the raw information from the original dataset and the "wisdom" distilled into the pseudo data by the teacher model.
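Steps 1 through 3 can be sketched in miniature. The fixed function below stands in for the trained teacher and a two-parameter linear model stands in for the student, but the training signal flows the same way: the teacher's outputs, not ground-truth labels, drive the student's loss.

```python
# A minimal sketch of teacher-student distillation, assuming the "teacher" is
# an already-trained black box we can query (here: a fixed function) and the
# "student" is a small model fit to the teacher's outputs by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Stands in for a trained Instant-NGP: maps inputs to target outputs.
    return 3.0 * x + 1.0

# Student: y = w*x + b, trained to minimize MSE against the teacher.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    x = rng.uniform(-1, 1, size=32)
    y_teacher = teacher(x)            # teacher supplies the training signal
    y_student = w * x + b
    err = y_student - y_teacher
    w -= lr * np.mean(err * x)        # gradient of MSE w.r.t. w (up to a constant)
    b -= lr * np.mean(err)            # gradient of MSE w.r.t. b (up to a constant)

print(round(w, 2), round(b, 2))       # → 3.0 1.0
```

The student recovers the teacher's behavior without ever seeing the original labels, which is exactly the property that lets a lightweight network replace a heavyweight one at inference time.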

  4. Running the Training:

    • The documentation recommends running this step in a terminal using tmux, which is a terminal multiplexer. This is because the training may take a long time, and using tmux ensures that the training continues even if the connection to the terminal is interrupted.

    • The benchmarking_nerf.sh script is executed to begin training.
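A minimal sketch of that launch, with the session name as an assumption and the script path taken from the documentation:

```shell
# Hypothetical session name; launch the training script inside a detached
# tmux session so it survives a dropped SSH connection.
SESSION=r2l_train
CMD="bash benchmarking_nerf.sh"
if command -v tmux >/dev/null 2>&1; then
    tmux new-session -d -s "$SESSION" "$CMD"
    echo "launched '$CMD' in tmux session $SESSION"
else
    echo "tmux not found; would launch '$CMD'"
fi
```

You can reattach later with `tmux attach -t r2l_train` to check on training progress.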

  5. Exporting the MobileR2L Model:

    • After training completes, the MobileR2L model is converted to ONNX format, which is a portable model format that can be used across different deep learning frameworks.

    • The documentation provides two ways to do this:

      • The model automatically exports the ONNX files when it converges.

      • Alternatively, you can manually run a provided script to perform the export.

In either case, you get three ONNX files: sampler, embedder, and model. Once you have those three files, you are ready to deploy your Snap Lens; Snap provides instructions for that step.

Once you have completed that, you should be good to go! This seems like a no-brainer for brands looking to market on Snap. Now companies can leverage NeRFs to create interactive experiences. One of the big highlights Snap has chosen to focus on is virtual try-ons, which makes a lot of sense given their demographic. But I believe there are many more applications for users.

Snap is the latest company to unveil a NeRF integration on its platform, and I highly doubt it will be the last. As companies begin to understand and implement the technology, I believe it will become a sales driver and a tool for users to interact with and get closer to brands.
