Have you ever captured a dataset only to realize that your camera was left in auto mode, letting the exposure fluctuate from frame to frame? These inconsistencies can break the multi-view consistency crucial for Neural Radiance Fields (NeRF), resulting in degraded or failed reconstructions.
In my consulting work, a common question I encounter is whether images can be recolored or retouched before processing the radiance field. Both of these issues are solved with Bilateral Guided Radiance Field Processing.
Not only does it handle fluctuating exposure levels, but you only need to edit a single image, and the method will propagate that style to the entire reconstruction. It can also handle HDR through a pipeline that resembles the HDR+ algorithm: first merging a burst of frames, then applying local tone mapping as a finishing step.
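To make the HDR+-style flow concrete, here is a deliberately simplified sketch of that two-step idea: average a burst of linear-RGB frames, then tone map with a crude local-detail boost. The function name and parameters are my own invention for illustration, and the real HDR+ pipeline aligns tiles before merging, which is omitted here.

```python
import numpy as np

def merge_and_tonemap(burst, gamma=1 / 2.2, detail_boost=1.5):
    """Toy stand-in for an HDR+-style pipeline (hypothetical helper):
    average a burst of linear-RGB frames for noise reduction, then
    apply a global tone curve plus a crude local-contrast boost."""
    merged = np.mean(np.stack(burst), axis=0)      # noise-reducing merge
    base = np.clip(merged, 0.0, 1.0) ** gamma      # global tone curve
    # "local" component: boost detail relative to a blurred version
    blur = 0.25 * (np.roll(base, 1, 0) + np.roll(base, -1, 0)
                   + np.roll(base, 1, 1) + np.roll(base, -1, 1))
    return np.clip(blur + detail_boost * (base - blur), 0.0, 1.0)

burst = [np.random.rand(8, 8, 3) * 0.5 for _ in range(4)]
out = merge_and_tonemap(burst)
print(out.shape)  # (8, 8, 3)
```

The averaging step is why burst capture helps in low light: noise shrinks with the number of merged frames while the signal stays put.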
The first stage involves disentangling image signal processing (ISP) enhancements. The researchers introduce a differentiable 3D bilateral grid that acts as a locally affine model, capable of approximating the complex, non-linear operations an ISP performs on each input view. By optimizing one grid per view jointly with the NeRF, the grids capture view-dependent ISP enhancements such as exposure adjustments and color corrections.
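A bilateral grid of affine transforms can be sketched in a few lines: each cell of a small 3D grid (two spatial axes plus a guidance axis, here luma) stores a 3x4 affine color matrix, and each pixel is "sliced" to its cell and transformed. This is a minimal illustration, not the paper's implementation; the actual method slices with trilinear interpolation so the grid stays differentiable, whereas this sketch uses nearest-neighbor lookup for brevity.

```python
import numpy as np

def slice_bilateral_grid(grid, img):
    """Apply a 3D bilateral grid of per-cell 3x4 affine color transforms.
    grid: (gh, gw, gd, 3, 4) — spatial cells x guidance (luma) bins.
    img:  (H, W, 3) linear RGB in [0, 1).
    Nearest-neighbor slicing for brevity (the real grid is sliced
    with trilinear interpolation to remain differentiable)."""
    gh, gw, gd = grid.shape[:3]
    H, W, _ = img.shape
    ys = np.minimum(np.arange(H) * gh // H, gh - 1)
    xs = np.minimum(np.arange(W) * gw // W, gw - 1)
    luma = img @ np.array([0.299, 0.587, 0.114])   # guidance channel
    zs = np.minimum((luma * gd).astype(int), gd - 1)
    A = grid[ys[:, None], xs[None, :], zs]         # (H, W, 3, 4)
    homog = np.concatenate([img, np.ones((H, W, 1))], axis=-1)
    return np.einsum('hwij,hwj->hwi', A, homog)    # per-pixel affine

# an identity grid leaves the image unchanged
grid = np.zeros((4, 4, 8, 3, 4))
grid[..., :3, :3] = np.eye(3)
img = np.random.rand(16, 16, 3)
out = slice_bilateral_grid(grid, img)
```

Because the transform is locally affine rather than globally affine, the grid can mimic spatially varying ISP behavior like local tone mapping, while each cell stays simple enough to optimize by gradient descent.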
By separating the ISP processing effects from NeRF training, the reconstructed radiance fields maintain multi-view consistency and are largely free from artifacts like floaters. This method handles featureless spaces, such as walls, quite adeptly. The bilateral grids are optimized during the training process to closely match the ISP effects for each view, effectively learning the unique enhancements applied by the camera.
Once the NeRF training with disentangled ISP effects is complete, the finishing stage reapplies and enhances these adjustments consistently across the entire 3D scene. Although it sounds like something from a Mortal Kombat game, the finishing stage is crucial.
The researchers propose a novel low-rank 4D bilateral grid to lift 2D view edits to 3D space. This grid allows users to perform detailed retouching on a single view and have those adjustments propagated consistently across the entire 3D scene. The low-rank approximation keeps the grid computationally feasible while still capturing the essential transformations needed for consistent 3D editing. They also found that low rank is a good prior for high-dimensional tensor completion, an idea borrowed from the well-studied low-rank matrix completion problem.
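The storage argument behind the low-rank idea is easy to demonstrate with a CP-style factorization, where a 4D tensor is represented as a sum of R outer products of per-axis factor vectors. This is a generic rank-R reconstruction sketch, not the paper's exact parameterization:

```python
import numpy as np

def low_rank_grid(factors):
    """Reconstruct a 4D grid from a rank-R CP factorization.
    factors: four (dim_k, R) matrices; the dense grid is the sum over
    the R columns of their 4-way outer products, so storage drops from
    prod(dims) entries to sum(dims) * R parameters."""
    fx, fy, fz, fw = factors
    return np.einsum('ar,br,cr,dr->abcd', fx, fy, fz, fw)

R = 4
dims = (16, 16, 16, 8)
factors = [np.random.rand(d, R) for d in dims]
grid = low_rank_grid(factors)

full = int(np.prod(dims))   # 32768 entries stored densely
compact = sum(dims) * R     # 224 parameters in factored form
print(grid.shape, full, compact)
```

Beyond memory savings, the rank constraint is what makes the completion problem well-posed: a single edited view only observes a slice of the 4D grid, and the low-rank structure propagates that observation to the unobserved cells.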
Users can apply various enhancements, like color correction and brightness adjustment, to a selected view using familiar 2D photo-editing tools such as Lightroom. Adobe's recent efforts to build out their 3D reconstruction pipeline make it easy to imagine a future where Lightroom and other Adobe products extend to hyper-real 3D!
This method lifts these 2D edits to the 3D radiance field, ensuring that adjustments are uniformly applied across all views, including full 360-degree captures. Users can also continue to make edits in 2D and apply them to the NeRF without retraining it; keep in mind that you will need to rerun the finishing stage. For instance, you can adjust the lighting of a scene or simply recolor an object. This fine-grained control through common photo editors is a significant advantage.
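To build intuition for what "lifting" an edit means, consider the degenerate case where the edit is recovered as a single global affine color transform fit by least squares. Everything here is an illustrative assumption: the paper instead fits the low-rank 4D bilateral grid described above, so real edits can vary over space and tone rather than being one global matrix.

```python
import numpy as np

def fit_affine_edit(original, edited):
    """Fit one 3x4 affine color transform mapping an original render
    to the user's edited version (least squares). This is the
    grid-size-1 special case of fitting a bilateral grid to an edit."""
    n = original.shape[0] * original.shape[1]
    X = np.concatenate([original.reshape(n, 3), np.ones((n, 1))], axis=1)
    Y = edited.reshape(n, 3)
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (4, 3)
    return A.T                                  # (3, 4)

img = np.random.rand(32, 32, 3)
edited = img * 1.2 + 0.05                       # brightness/contrast tweak
A = fit_affine_edit(img, edited)
# the recovered transform can now restyle any other view of the scene
other = np.random.rand(32, 32, 3)
restyled = other @ A[:, :3].T + A[:, 3]
```

Because only the transform is refit, not the radiance field, this is why new 2D edits do not require retraining the NeRF itself.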
The method is built on top of a PyTorch implementation of Zip-NeRF and can run on a single RTX 3090 GPU. The process takes about two and a quarter hours, though this duration is directly tied to the NeRF method used. Adapting it to Instant-NGP or Nerfacto could result in significant speed boosts. The finishing stage operates much faster, currently taking about 15 minutes to complete.
With the code already released under an Apache 2.0 License, allowing for a multitude of use cases, it won't be long before you can revisit your HDR captures that never worked before. NeRFs continue to evolve with no known ceiling, making the future bright for radiance field technologies.
For more information, check out their project page or catch them at SIGGRAPH in Denver this July, where they will be presenting.