The nerfstudio team recently published nerfacto-big, a beefier, albeit slower, version of nerfacto. It took about an hour to train, but it quickly became my go-to model. The fidelity I was getting was significantly better, which, at the end of the day, is what matters most to me.
That said, everything is relative.
Nerfacto-Huge
A few days ago, nerfstudio merged some additional code into the main branch. One of those updates contained nerfacto-huge, created by Justin Kerr.
For those who don't know Justin, he is one of the co-authors of LERF, and I was lucky enough to meet him a couple of months back in Berkeley.
Nerfacto-huge takes a whopping 3-6 hours to train for me and consumes all 24GB of my GPU's VRAM. With all of this in mind, is it worth it? Without a doubt, yes. I wouldn't mind waiting 24 hours for a NeRF to train if the output is that much better, and this one's output is.
Depending on your GPU, a different nerfacto method may be the better choice for you. The nerfstudio team outlines the VRAM draw for each method.
Here are some of the facets of nerfacto-big that nerfacto-huge tweaks:
Batch size
Hidden dims for transient/MLP/color
Number of samples (cranked up to 512, 512, 64)
Hashgrid resolution
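To make the comparison concrete, here is a minimal sketch of those knobs as a plain Python dataclass. This is not nerfstudio's actual config class, and apart from the sample counts (512, 512, 64) mentioned above and the 24GB VRAM figure, the specific values are hypothetical placeholders for illustration.

```python
from dataclasses import dataclass
from typing import Tuple

# Illustrative sketch only: the class name and most values below are
# assumptions, not nerfstudio's real configuration API.
@dataclass
class NerfactoVariant:
    name: str
    batch_size: int                     # rays per training batch
    hidden_dim: int                     # hidden width for transient/MLP/color heads
    proposal_samples: Tuple[int, int]   # samples per proposal-network pass
    nerf_samples: int                   # samples for the final NeRF pass
    hashgrid_max_res: int               # finest hashgrid resolution

# Only the sample counts come from the list above; the rest are placeholders.
nerfacto_huge = NerfactoVariant(
    name="nerfacto-huge",
    batch_size=16384,
    hidden_dim=256,
    proposal_samples=(512, 512),
    nerf_samples=64,
    hashgrid_max_res=8192,
)

# The three sample counts, in the order the post lists them.
print(nerfacto_huge.proposal_samples + (nerfacto_huge.nerf_samples,))  # (512, 512, 64)
```

The point of cranking each of these up is the same: more rays, wider networks, more samples per ray, and a finer hashgrid all buy fidelity at the cost of training time and VRAM.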
I've gotten into the habit of leaving my computer running overnight, and to be honest, checking the final product in the morning has almost the same effect on me as coffee. Almost ;).
You should also check out Justin's other work, including LERF. It's easily a top contender for my favorite paper this year, and I believe it will have a significant impact not only on NeRFs, but on several commercial industries.