In an otherwise relatively small update, a highly requested feature has finally arrived: the ability to train across multiple GPUs.
The multi-GPU implementation is based on the NYU paper "On Scaling Up 3D Gaussian Splatting Training". Training times drop significantly: roughly 11.5 minutes for 30K steps on 4 GPUs, with memory usage falling to about 2 GB per GPU.
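To give a feel for how this looks in practice, here is a minimal sketch of a per-rank training loop in which each GPU owns a disjoint shard of the Gaussians and renders its own camera batch. This is an illustration, not the official example script: the `distributed` flag on `rasterization`, the parameter names, and the sharding scheme are assumptions based on the release description, so check the docs for the authoritative API.

```python
# Sketch of multi-GPU 3DGS training; names and flags are assumptions.
import torch
import torch.distributed as dist
from gsplat import rasterization

def train_rank(rank: int, world_size: int, splats: dict, dataloader, num_steps: int = 30_000):
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    device = torch.device(f"cuda:{rank}")

    # Each rank keeps a disjoint shard of the Gaussians; parameters stay local
    # and only intermediate render data is exchanged inside the rasterizer.
    params = {k: torch.nn.Parameter(v[rank::world_size].to(device)) for k, v in splats.items()}
    optimizer = torch.optim.Adam(params.values(), lr=1.6e-4)

    for step, (image, viewmat, K) in zip(range(num_steps), dataloader):
        colors, alphas, meta = rasterization(
            means=params["means"],
            quats=params["quats"],
            scales=torch.exp(params["scales"]),          # stored in log space
            opacities=torch.sigmoid(params["opacities"]),  # stored in logit space
            colors=params["colors"],
            viewmats=viewmat[None].to(device),  # [1, 4, 4]
            Ks=K[None].to(device),              # [1, 3, 3]
            width=image.shape[1],
            height=image.shape[0],
            packed=True,
            distributed=True,  # assumption: enables the multi-GPU path from the paper
        )
        loss = torch.nn.functional.l1_loss(colors[0], image.to(device))
        loss.backward()
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)

    dist.destroy_process_group()
```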
Another exciting update comes from Meta researcher Jonathon Luiten, who you may recognize as one of the authors behind 4DGS and SplaTAM. His small fix corrects projection for images with non-centered cameras (e.g. crops), so details that previously appeared fragmented or as floaters now render much sharper.
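As a toy illustration of why cameras end up non-centered in the first place: cropping an image shifts the principal point (cx, cy) away from the image center, and a projection that assumes a centered principal point then lands in the wrong place. The helper below is a hypothetical example using standard pinhole math, not code from the actual fix in #305.

```python
# Toy example: cropping an image leaves the principal point off-center.
import numpy as np

def crop_intrinsics(K: np.ndarray, x0: int, y0: int) -> np.ndarray:
    """Shift the principal point when the image is cropped starting at (x0, y0)."""
    K_crop = K.copy()
    K_crop[0, 2] -= x0  # cx moves by the horizontal crop offset
    K_crop[1, 2] -= y0  # cy moves by the vertical crop offset
    return K_crop

# Intrinsics of a 1920x1080 camera with a centered principal point.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])

# After cropping a 512x512 patch at (700, 300), cx/cy are no longer at W/2, H/2.
print(crop_intrinsics(K, 700, 300))
```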
The full changelog is below.
[Docs] Evaluation metrics for mcmc, antialiased, and absgrad. by @maturk in #300
Update requirements.txt by @liruilong940607 in #308
Fix projection for images with non-centered camera (e.g. crops) by @JonathonLuiten in #305
prevent race condition when JIT in multiprocess by @liruilong940607 in #312
Support Multi-GPU training based on the paper "On Scaling Up 3D Gaussian Splatting Training" by @liruilong940607 in #253
Update version.py by @liruilong940607 in #313
Nerfstudio continues to be free under the permissive Apache 2.0 License. If you've been thinking about contributing or submitting a PR, now's the time! Be sure to update to the latest version from their GitHub page.