Google CloudNeRF: Zip-NeRF and CamP in the Cloud

Michael Rubloff

May 8, 2024


It doesn't seem like a lot of people know this, but you can run CamP and Zip-NeRF in the cloud, straight through Google, and it's actually super easy. It's called CloudNeRF.

I'm going to walk you through how you can create similar output using your existing data. It costs roughly $20 to process a capture, but I've been able to get everyone $350 in credits to start, for free! Click this link and create a Google Cloud account, and it will automatically deposit the $350 credit. To be transparent, I may receive some affiliate commissions for sign-ups, which would greatly help fund this site.

Google has also provided a pricing calculator!

There are two things you'll need to do to get started: request a quota increase for GPUs and for CPUs. The standard allocation will not be enough to let you get started. From the Quotas page, you can request an increase for custom model training NVIDIA V100 GPUs and, separately, for custom model training CPUs for N1/E2. You'll need to request 8 GPUs and 64 CPU cores. You'll notice that there is more than one option for each, based on location. I would recommend selecting only a couple of options, as they'll need to align with your bucket location in the next step. I mainly selected the options in the US.

If you're having any trouble requesting a quota increase, you can use this article to help.

Once your quota has been approved, you can create your bucket. The bucket creation stage is pretty straightforward. You'll need to give the bucket a unique name, just as you would for a social media account; for instance, you won't be able to call it Google. Note that you will need to set the region to the same region as your GPU and CPU allotment. I used the us-central1 (Iowa) option because it was cheaper. Once you have created a bucket, you can reuse it for subsequent projects, as long as you create a separate folder under it for each project.
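
If you'd rather script the bucket creation than click through the console, here's a minimal sketch using the google-cloud-storage Python client. The project ID and bucket name are placeholders, and the location should match wherever your quota was granted.

```python
# Minimal sketch using the google-cloud-storage client (pip install google-cloud-storage).
# The project ID and bucket name below are placeholders; swap in your own.
from google.cloud import storage

client = storage.Client(project="my-gcp-project")  # hypothetical project ID

# Bucket names must be globally unique, and the location should match
# the region where your GPU/CPU quota was granted.
bucket = client.create_bucket("my-cloudnerf-bucket", location="us-central1")
print(f"Created bucket {bucket.name} in {bucket.location}")
```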

If you'd like, you can also have Google Cloud create a bucket for you in a later step, but I've found it easier to create one upfront. That way, if you're using custom data, you can start uploading your images earlier.

Once the bucket is created, you can simply click into it and upload a folder with your assembled collection of images.
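
If you prefer to push the images up from a script rather than the web uploader, the same Python client works; the local folder, file pattern, and bucket layout below are just illustrative.

```python
# Sketch of uploading a local folder of images into the bucket.
# All paths and names are placeholders for the running example.
from pathlib import Path
from google.cloud import storage

client = storage.Client(project="my-gcp-project")   # hypothetical project ID
bucket = client.bucket("my-cloudnerf-bucket")        # the bucket created earlier

local_dir = Path("captures/datasetname/images")      # hypothetical local folder
for image_path in sorted(local_dir.glob("*.jpg")):
    # Mirror the local layout inside the bucket: datasetname/images/<file>
    blob = bucket.blob(f"datasetname/images/{image_path.name}")
    blob.upload_from_filename(str(image_path))
    print(f"Uploaded {image_path.name}")
```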

From the CloudNeRF page, you can get an overview of the method, Zip-NeRF and CamP. Select the "Open Notebook" option and it will give you two implementations to pick from: JAX or PyTorch. The JAX implementation should be a bit faster, in addition to having higher accuracy.

Once the Notebook has been opened, scroll down to the first step, under Setup. This is where you will input the bucket information that CloudNeRF will pull from. Make sure it has gs:// at the front. So if your bucket were called CloudNeRF, you would fill in gs://CloudNeRF. If you decided to let Google Cloud create the bucket for you, leave it blank and continue.

Hit the small gray play button to the left of the step and it will set up a runtime and prepare the bucket. You'll know each step completed correctly when a small green checkmark appears next to the step's play button. You can also check the logs, which will give you an overview of where the step is in the process.

If you are using custom data rather than the provided Mip-NeRF 360 dataset, you can skip the "Prepare dataset" step and go straight to the NeRF Pipeline underneath. This is where the camera alignment phase comes in, and it asks where exactly the images are located. You'll need to point it to the exact folder holding your images, so if you have datasets at different resolutions, indicate which one you want to use.

You can copy the exact path to the input images from the bucket and enter it here. Just make sure it still has gs:// at the front. Taking the CloudNeRF bucket example again, your input might look like gs://CloudNeRF/datasetname/images.

The output is ever so slightly more complicated, in that you are telling the method where to place the created data. I would recommend pointing it to something like gs://CloudNeRF/exp/datasetname, where exp stands for experiment.
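
Putting the two paths together, here's roughly what those fields end up looking like. The bucket and dataset names are just the running example, and the variable names are mine, not the notebook's.

```python
# Illustrative values for the input/output fields, following the
# gs://<bucket>/<folders> convention described above.
BUCKET_URI = "gs://CloudNeRF"                      # must keep the gs:// prefix
INPUT_IMAGES = f"{BUCKET_URI}/datasetname/images"  # exact folder holding the images
OUTPUT_DIR = f"{BUCKET_URI}/exp/datasetname"       # where results will be written
```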

I've found this step takes roughly 25 minutes to run, with the first seven minutes or so showing up as PENDING, so don't be alarmed if it doesn't say RUNNING immediately. You can also click the Log URL, which is generated right after you start running a command; it's really helpful for viewing progress.

Once the COLMAP stage is complete and there is a green check mark next to the play button, you can start training Zip-NeRF. The only adjustable item here is the factor, which can be 0, 2, 4, or 8. This is essentially an image downsampling factor, with 0 using the original image size. With that in mind, I've used 0.
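
As a rough sketch of what the factor means for image size (this is my reading of a standard downsampling factor, not code taken from the notebook):

```python
# Assumed behavior: factor 0 keeps the original resolution, otherwise
# width and height are divided by the factor.
def downsampled_size(width: int, height: int, factor: int) -> tuple[int, int]:
    if factor == 0:
        return width, height
    return width // factor, height // factor

print(downsampled_size(6144, 3456, 0))  # (6144, 3456) - full-resolution 6K capture
print(downsampled_size(6144, 3456, 4))  # (1536, 864)
```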

The training stage has taken roughly 20 minutes for me with datasets of about 100 6K images. As a comparison, Nerfstudio takes me roughly 55 minutes on datasets of the same size. There isn't a viewer, but you can monitor training progress from the Logs, which will tell you the current training step and the PSNR.

Finally, once training is complete, you can generate an orbit render of your scene. You can pick the resolution, frame rate, and length of the video, e.g., 150 frames at 30 fps is 5 seconds.
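
The length math is just frames divided by frame rate:

```python
# 150 frames rendered at 30 fps works out to a 5 second clip.
frames = 150
fps = 30
print(f"{frames} frames at {fps} fps = {frames / fps:.0f} seconds")  # 5 seconds
```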

To cut down on costs, I would recommend exporting at 1080p and then upscaling afterwards as desired.
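
The upscaling happens outside CloudNeRF, so any tool you like will do. As one rough illustration, assuming ffmpeg is installed locally and with placeholder filenames:

```python
# Sketch of a simple Lanczos upscale of the 1080p render to 4K with ffmpeg.
# Filenames are placeholders; any upscaler you prefer works just as well.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "render_1080p.mp4",                 # hypothetical 1080p render
        "-vf", "scale=3840:2160:flags=lanczos",   # upscale to 4K
        "render_4k.mp4",
    ],
    check=True,
)
```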

Surprisingly, this is the longest step of the process for me, but once it's done, you can either bring the rendered video up from the code section on the page or simply go into your bucket and click through the /exp folder you created to find the videos.
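
If you'd rather pull the renders down without clicking through the console, the same storage client can list and download them; the project ID, prefix, and file extension below are placeholders.

```python
# Sketch of downloading rendered videos from the experiment folder.
from google.cloud import storage

client = storage.Client(project="my-gcp-project")  # hypothetical project ID
bucket = client.bucket("my-cloudnerf-bucket")

# List everything under the experiment folder and grab any mp4 renders.
for blob in bucket.list_blobs(prefix="exp/datasetname/"):
    if blob.name.endswith(".mp4"):
        local_name = blob.name.split("/")[-1]
        blob.download_to_filename(local_name)
        print(f"Downloaded {local_name}")
```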

There is a way to render out custom camera paths, though I have not explored it yet. There isn't currently a viewer in place, but given that Google also has SMERF, perhaps we will see the inclusion of the SMERF viewer into CloudNeRF!
