
Michael Rubloff
Jan 25, 2026
It's a familiar problem: getting home after a capture and realizing that some portion of the frames were under or over exposed. While Bilateral Guided Radiance Fields, otherwise known as Bilagrid or Bilarf, alleviated this to some degree in 2024, the area has received far less attention than you would expect given its impact on real world capture.
This is where the new paper from NVIDIA, PPISP: Physically-Plausible Compensation and Control of Photometric Variations in Radiance Field Reconstruction, comes in, and it's a paper I was stoked to see! If you’ve struggled with floaters appearing in your scenes, you will be too.
Its thesis is that a lot of the appearance noise that breaks multi view reconstruction isn’t random at all. It’s the camera doing camera things. Auto exposure drifting as you pan past a window. White balance nudging warmer as the scene changes. Vignetting and response curves that are stable for a given sensor, but different across devices. These effects are perfectly normal in photography, but they violate an assumption sitting underneath most radiance field optimization: that the same 3D point should look the same across views, up to geometry and view dependency.
Historically, the radiance field world has handled this mismatch with a spectrum of strategies. Some methods add per image latent vectors that “soak up” photometric residuals. Others learn affine color transforms or per pixel correction grids. If your correction module has enough capacity, it can make training views look great, but it can also start explaining away things it shouldn’t. It may entangle with geometry, flatten true lighting cues, or generalize poorly when you render novel viewpoints.
PPISP uses a differentiable image signal processing pipeline that sits downstream of the radiance field renderer. Instead of asking the scene representation to match the final RGB pixels directly, PPISP treats the renderer as producing a kind of “raw radiance image,” then passes it through a sequence of camera inspired modules that mimic common steps in image formation.
In plain terms, PPISP breaks the camera’s contribution into a few parts that photographers will recognize immediately. First is an exposure adjustment: a per frame brightness scale that behaves like exposure value changes. If one frame is darker because the camera shortened shutter time or reduced gain, PPISP can represent that as a single exposure offset for that frame.
Next is vignetting, which is treated as a property of the sensor and lens system rather than something that should vary arbitrarily per image. Then comes color correction: a physically motivated way to shift chromaticities (including white balance behavior) while explicitly decoupling those changes from exposure. Brightness and color often get tangled in learned corrections. PPISP tries to separate them so that “make this frame brighter” doesn’t inadvertently also push shadows toward a different hue.
Finally, there’s a camera response function that turns sensor irradiance into the final pixel values. Instead of leaving that curve implicit inside a neural network, PPISP uses a compact parametric form meant to stay stable during optimization while still capturing the “S-curve” behavior we associate with cameras.
This pipeline becomes a differentiable approximation of a real camera’s image formation function. During reconstruction, the radiance field and the PPISP parameters are optimized jointly, so the system can explain away photometric inconsistencies using camera like knobs rather than forcing the scene representation itself to absorb them.
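To make that concrete, here is a minimal sketch of what a differentiable ISP sitting downstream of the renderer can look like, written in PyTorch. The module name and the exact parametric forms (a 2^EV exposure scale, a radial vignette, a row-normalized color matrix, a gamma-style response curve) are my own stand-ins for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn


class ToyISP(nn.Module):
    """Illustrative differentiable ISP applied to rendered "raw radiance".

    The parametric forms below are stand-ins chosen for clarity, not the
    paper's exact parameterizations. At initialization the whole pipeline
    is an identity mapping.
    """

    def __init__(self, num_frames: int):
        super().__init__()
        # Per-frame exposure offset in stops (EV), one scalar per training image.
        self.ev = nn.Parameter(torch.zeros(num_frames))
        # Vignetting strength tied to the sensor/lens, shared by every frame.
        self.vignette_strength = nn.Parameter(torch.zeros(1))
        # Per-frame 3x3 color correction matrix, initialized to identity.
        self.ccm = nn.Parameter(torch.eye(3).repeat(num_frames, 1, 1))
        # Global response-curve parameter, kept deliberately compact.
        self.log_gamma = nn.Parameter(torch.zeros(1))

    def forward(self, radiance: torch.Tensor, frame_idx: int) -> torch.Tensor:
        # radiance: (H, W, 3) linear output of the radiance field renderer.
        H, W, _ = radiance.shape

        # 1) Exposure: a single brightness scale per frame, like EV compensation.
        x = radiance * torch.exp2(self.ev[frame_idx])

        # 2) Vignetting: radial falloff that belongs to the device, not the frame.
        yy, xx = torch.meshgrid(
            torch.linspace(-1.0, 1.0, H), torch.linspace(-1.0, 1.0, W), indexing="ij"
        )
        r2 = (xx ** 2 + yy ** 2).to(x)
        x = x * (1.0 - self.vignette_strength * r2).clamp(min=0.0)[..., None]

        # 3) Color correction: chromaticity / white balance shift, decoupled from
        #    exposure by normalizing rows so the matrix cannot change brightness.
        ccm = self.ccm[frame_idx]
        ccm = ccm / ccm.sum(dim=1, keepdim=True).clamp(min=1e-6)
        x = torch.einsum("hwc,dc->hwd", x, ccm)

        # 4) Camera response function: a compact parametric tone curve.
        gamma = torch.exp(self.log_gamma)
        return x.clamp(min=1e-6) ** gamma
```

The point of keeping every knob this low dimensional and interpretable is exactly the one the paper is making: the optimizer gets enough capacity to explain away camera behavior, but far less room to hide geometry or lighting errors inside the correction.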
What's also very cool about PPISP isn’t just that it can correct training frames; it’s how it handles novel views.
PPISP learns a controller that predicts per frame ISP parameters directly from the rendered radiance of the novel view. Conceptually, it’s modeled after what real cameras do: auto exposure and auto white balance don’t require a ground truth image; they estimate settings based on what the sensor sees.
So PPISP trains in two phases. In phase one, it trains the reconstruction and camera specific parts of the ISP pipeline. In phase two, it freezes that foundation and trains the controller network to output exposure and color correction settings that make rendered views match the training images. At inference time, when you render a novel camera pose, the controller looks at the raw rendered radiance and predicts the adjustments needed, without needing any target image.
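Here is a rough sketch of what that controller and second phase can look like. The architecture, its inputs (simple global image statistics), and its outputs are assumptions on my part for illustration; the paper's network is its own design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyISPController(nn.Module):
    """Illustrative "auto ISP": predicts per-frame settings from the rendered
    radiance alone, the way auto exposure and auto white balance meter off
    what the sensor sees. Architecture and outputs are assumptions, not the
    paper's network."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        # Global image statistics in, one EV offset and three channel gains out.
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1 + 3),
        )

    def forward(self, radiance: torch.Tensor):
        # radiance: (H, W, 3). Summarize it with per-channel mean and std,
        # a crude analogue of a camera's metering statistics.
        stats = torch.cat([radiance.mean(dim=(0, 1)), radiance.std(dim=(0, 1))])
        out = self.mlp(stats)
        ev, color_gains = out[:1], F.softplus(out[1:])
        return ev, color_gains


# Phase two, in outline: the radiance field and the camera-level ISP parameters
# are frozen, and only the controller is optimized so that rendered training
# views, pushed through the frozen ISP with the predicted settings, match the
# captured images. (render, frozen_isp, and view are placeholders.)
#
#   raw = render(scene, view.pose)        # frozen radiance field
#   ev, gains = controller(raw)           # predicted from the render alone
#   pred = frozen_isp(raw, ev=ev, gains=gains)
#   loss = (pred - view.image).abs().mean()
```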
Because the pipeline parameters correspond to recognizable camera settings and behaviors, PPISP can incorporate information like relative exposure values when available. If you have EXIF derived exposure compensation or bracketed captures, those signals can either supervise the exposure offsets or be fed into the controller to improve predictions.
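For instance, relative exposure values can be derived straight from standard EXIF fields. The helper below is hypothetical, and exactly how PPISP consumes these numbers is up to the implementation, but it shows how little metadata is needed to get a useful signal.

```python
import math


def exposure_value(f_number: float, exposure_time_s: float, iso: float) -> float:
    """ISO-100-referenced exposure value from standard EXIF fields.

    EV100 = log2(N^2 / t) - log2(ISO / 100). For the same scene, frames shot
    at a lower EV100 gathered more light and should come out brighter.
    """
    return math.log2(f_number ** 2 / exposure_time_s) - math.log2(iso / 100.0)


# Hypothetical usage: relative offsets between frames, which could supervise
# per-frame exposure parameters or be fed to the controller as extra input.
frames = [
    {"f_number": 1.8, "exposure_time_s": 1 / 120, "iso": 100},  # in the sun
    {"f_number": 1.8, "exposure_time_s": 1 / 30, "iso": 400},   # compensating in the shade
]
ev100 = [exposure_value(**f) for f in frames]
relative_ev = [e - ev100[0] for e in ev100]  # [0.0, -4.0] stops relative to frame 0
```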
That’s important because the industry side of radiance fields often lives in capture workflows where metadata is abundant. The last few years trained the community to ignore EXIF because neural methods didn’t have an obvious place to use it. PPISP suggests a structure where metadata can actually help, without turning the whole pipeline into a calibration project. I have heard from people working with radiance fields in their industries that they badly want reconstructions to carry and output more of this kind of metadata.
Another incredibly exciting thing is how broadly applicable PPISP is. It works across radiance field representations, including gaussian splatting, 3DGUT, and NeRFs. In the paper, they integrate it on top of both 3DGUT and gsplat, and I can also confirm that the method will be coming to both libraries soon.
There are limits, of course. Modern phone cameras apply localized tone mapping, spatially adaptive processing, and scene dependent tricks that aren’t captured by PPISP’s clean parametric modules. And while the controller can behave like auto exposure in many sequences, it can still struggle in scenes that break the correlation between view content and exposure decisions.
Radiance fields are increasingly being treated as deployable media assets through gaussian splatting. As 3D reconstruction continues to open up to the general public, raising the floor on consistent color across shots, predictable exposure behavior, and ultimately high fidelity results becomes critical.
Regardless, PPISP should help reconstruction fidelity across the board, and I hope it will be implemented in both local training and cloud based platforms. It carries an Apache 2.0 license and will be coming to both gsplat and 3DGRUT soon.