DIFIX3D+: Removing Artifacts and Floaters in Captures

Michael Rubloff

Mar 24, 2025

It shouldn’t surprise anyone here that I’m fascinated by radiance fields and their power to reconstruct the world in lifelike 3D. Sometimes, though, a capture goes wrong: you miss a section, your camera shakes, or something else unexpected happens.

Whether unavoidable or user-induced, these mistakes lead to artifacts and missing sections in your captures. A large body of research has gone into solving this problem, and long-time readers of this site will remember when we wrote about NeRFLiX in 2023. I’ve witnessed steady improvements since then, but DIFIX3D+ takes things up a notch.

DIFIX3D+ addresses these challenges by tapping into the rich priors of large-scale 2D generative diffusion models. Whereas many approaches query diffusion models during every training iteration (a process that can be slow and unwieldy), DIFIX3D+ adapts a single-step diffusion model, dubbed DIFIX, to correct artifacts in one efficient pass. The model is fine-tuned from SD-Turbo, a single-step image diffusion model, and learns to “fix” rendered novel views by conditioning on clean reference views.

Rather than waiting until the end of training to improve the overall 3D model, DIFIX3D+ employs a progressive update scheme. Cleaned views produced by DIFIX are distilled back into the 3D representation. In practice, the pipeline starts with an initial reconstruction and then iteratively perturbs camera poses, renders novel views, enhances them via DIFIX, and uses the improved images to update the 3D model. This iterative refinement ensures that even areas far from the original input views receive robust 3D cues, leading to a more consistent and faithful reconstruction.
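To make the progressive scheme concrete, here is a minimal toy sketch of that loop in Python. Everything in it is a stand-in: `difix`, `render`, and `distill` are hypothetical placeholders I made up for illustration (the real DIFIX is a fine-tuned SD-Turbo network, and the real 3D model is a NeRF or Gaussian splatting representation), so this only shows the shape of the pipeline, not the actual method.

```python
import numpy as np

def difix(rendered, reference):
    """Stand-in for the single-step DIFIX model: a toy blend toward the
    clean reference. The real model denoises the render in one pass."""
    return 0.9 * rendered + 0.1 * reference

def render(model, pose):
    """Stand-in renderer: returns a fake 64x64 RGB image for a pose."""
    rng = np.random.default_rng(pose)
    return rng.random((64, 64, 3))

def distill(model, pose, image):
    """Stand-in for distilling a cleaned view back into the 3D model."""
    model.append((pose, image))
    return model

model = []                      # placeholder for the 3D representation
reference = render(model, 0)    # a clean input view

# Iteratively perturb the camera, render, fix with DIFIX, and distill.
for step in range(3):
    pose = step + 1             # progressively perturbed camera pose
    novel = render(model, pose)
    cleaned = difix(novel, reference)
    model = distill(model, pose, cleaned)

print(len(model))               # number of cleaned views distilled back
```

The key point the sketch captures is that the diffusion model runs once per rendered view inside the refinement loop, rather than at every optimization step.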

Finally, at inference time, the same DIFIX model provides an additional real-time polishing step. After the 3D model produces its final novel view, DIFIX is applied once more to remove any lingering imperfections. Thanks to its single-step design, this post-processing adds only a minimal delay, making it practical for real-time applications.
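The inference-time polish is just one more pass of the same fixer after the final render. Again, this is a hypothetical toy sketch with made-up stand-ins (random arrays in place of real renders, a blend in place of the real network), not the authors’ implementation:

```python
import numpy as np

def difix(view, reference):
    """Stand-in single-step fixer; the real one is a one-pass SD-Turbo."""
    return np.clip(0.9 * view + 0.1 * reference, 0.0, 1.0)

rng = np.random.default_rng(0)
final_view = rng.random((64, 64, 3))   # the 3D model's final novel view (toy data)
reference = rng.random((64, 64, 3))    # a clean captured view (toy data)

# One extra DIFIX pass as post-processing; a single diffusion step
# keeps the added latency small enough for real-time use.
polished = difix(final_view, reference)
print(polished.shape)
```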

DIFIX3D+ is a pipeline that leverages single-step diffusion models to both enhance and correct artifacts in radiance field representations. It requires as little as 8GB of VRAM, making it compatible with most consumer GPUs.

While the paper only specifically calls out compatibility with NeRFs and Gaussian Splatting, I don’t see why it couldn’t be used with other radiance field representations, such as SVRaster and Radiant Foam.

Additionally, I believe this will make a large impact in the world of simulation, both for generative workflows using models like Cosmos and for captures of the real world. The code is set to be released at some point in the future, and we will keep you updated as it moves toward release. More information can be found on their Project Page.
