On 4/10/2023, James Perlman released version 0.0.9 of TurboNeRF. Since we last covered his progress, there have been several improvements, including higher reconstruction quality, dataset export, and adjustable cameras!
Perlman has also stated that saving and loading snapshots is now working on his local machine and will ship in a future update. Additionally, support for rendering multiple NeRFs simultaneously is coming soon!
James is also livestreaming the creation of TurboNeRF here if you would like to follow along.
Download the newest version here:
A new version of the TurboNeRF core binary is now available! PyTurboNeRF 0.0.9
A new version of the Blender addon is now available. TurboNeRF-Blender 0.0.9
Notable changes in each version include:
TurboNeRF 0.0.9
The near and far clipping planes are previewable
Fixed a bug where, after a while, Blender would randomly render the same frame over and over again, leading to incorrect output.
Fixed a potential bug in the raymarcher
TurboNeRF 0.0.8
Significant increase in reconstruction quality and convergence speed, and a decrease in cloudiness
For a lot of my real scenes, I don't have to clip so close to the subject anymore.
Identified a mathematical error in the back propagation algorithm and fixed it.
TurboNeRF 0.0.7
Reset Training button added
Camera pose loading & image loading have been split into 2 steps
You can see the progress of image loading instead of just waiting while the UI hangs (good for larger datasets)
TurboNeRF 0.0.6
Fixed a bug where resizing the Blender window would crash PyTurboNeRF
More robust raymarching (less freezing during renders/previews) + enabled use of the shift X & Y properties on the Blender camera
Fixed a divide-by-zero issue (affects scenes where input images have an alpha channel)
You can now see the training images composited in with the NeRF (show near/far clipping planes)! This is different from NeRFStudio & Luma AI: the images are composited IN the scene instead of on top of it, and they can intersect the volumes. Try bringing your training images as close to the subject as possible without intersecting anything. Near/far planes can be adjusted for all cameras or on a per-camera basis.
QUALITY IMPROVEMENT: Thanks to some tips from https://github.com/cheind/pure-torch-ngp/blob/develop/torchngp/training.py#L301-L314 I have implemented the "random background color" training technique, which involved updating the manual gradients. It was a headache, but it leads to faster convergence and does not rely on using alpha loss.
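The core idea of random-background-color training can be sketched in a few lines. This is an illustrative NumPy version of the compositing step only, not TurboNeRF's actual CUDA implementation; the function name and array shapes are my own assumptions. Both the rendered rays and the ground-truth pixels are composited over the same randomly sampled background, so any residual "cloudiness" in semi-transparent regions shows up as loss:

```python
import numpy as np

def composite_over_random_background(pred_rgb, pred_alpha, gt_rgba, rng):
    """Composite rendered ray colors and ground-truth pixels over the same
    random background color, so the loss compares two images that share an
    identical background. (Hypothetical helper; shapes are assumptions.)

    pred_rgb:   (N, 3) accumulated ray colors (premultiplied by alpha)
    pred_alpha: (N, 1) accumulated ray opacities
    gt_rgba:    (N, 4) ground-truth pixels with an alpha channel
    """
    # One random background color per batch (could also be per-ray).
    bg = rng.uniform(0.0, 1.0, size=(1, 3))

    pred = pred_rgb + (1.0 - pred_alpha) * bg
    gt = gt_rgba[:, :3] * gt_rgba[:, 3:4] + (1.0 - gt_rgba[:, 3:4]) * bg
    return pred, gt
```

Because the background is re-randomized every step, the network cannot "hide" density by matching a fixed background color, which is why this works without an explicit alpha loss term.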
DATASET ADJUSTMENTS: Besides being able to adjust the near/far plane per-camera, you can also adjust the camera transforms themselves. I'd recommend against updating single cameras, but there is now an empty object generated inside the NeRF called CAMERAS, and you can grab and scale this object. PRO TIP: Grab and scale the CAMERAS object such that the subject of your scene fits into the 1x1x1 cube (the NeRF object has a wireframe cube associated with it; try to fit your subject inside this box). HOT RELOADING: Adjustments made to the dataset currently only apply while in preview mode. However, you can make adjustments while training.
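The arithmetic behind that pro tip is simple: the scale you want is roughly the reciprocal of the subject's largest bounding-box extent. TurboNeRF doesn't expose a helper like this; the function below is a hypothetical sketch of how you might compute the uniform scale to apply to the CAMERAS empty so the subject lands inside the unit cube:

```python
def unit_cube_fit_scale(bbox_min, bbox_max, margin=0.9):
    """Uniform scale factor that shrinks a subject's axis-aligned bounding
    box so its largest extent fits inside a 1x1x1 cube. (Hypothetical
    helper, not part of TurboNeRF.)

    margin < 1.0 leaves headroom so the subject does not touch the faces
    of the cube.
    """
    extents = [hi - lo for lo, hi in zip(bbox_min, bbox_max)]
    largest = max(extents)
    if largest <= 0:
        raise ValueError("bounding box must have positive extent")
    return margin / largest
```

For example, a subject measuring 2 units along its longest axis would call for a scale of about 0.45 with the default margin, since scaling the cameras rescales the reconstructed scene along with them.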
TurboNeRF 0.0.5
Training algorithm should be approximately the same as 0.0.4, though I have made minor adjustments & optimizations
"Limit Training" checkbox lets you set a maximum number of steps to train to
Progress bar shows you what percentage of training steps are complete
Live count of number of steps (throttled to 4 updates per second to preserve UI responsiveness)
Live update of the loss value (L1 smoothed loss; it should probably report MSE & PSNR too at some point)
Update preview checkbox allows you to enable/disable the model from being regularly re-rendered ~ Model updates will still be rendered if you pan around it
Slider for setting # of steps between updates
Preview NeRF button automatically sets Blender to the TurboNeRF renderer & render preview configuration
Fixed a bug where setting the TurboNeRF render preview configuration would crash before you had started training a model