Research

RefFusion: Inpainting with 3DGS

Michael Rubloff

Apr 19, 2024

One of the interesting problems in 3D graphics is how to effectively add, remove, or alter elements within a scene while maintaining a lifelike appearance. We've seen advancements with several innovative papers addressing these issues, employing diffusion-based models to enhance scene realism and consistency.

A notable example is Google's ReconFusion, which utilizes NeRFs as its foundation. NVIDIA's recently announced RefFusion, however, takes a different approach by employing Gaussian Splatting.

Unlike systems that rely on text prompts for generating 2D references, RefFusion personalizes the process using one or multiple reference images specific to the target scene. These images not only guide the visual outcome but also adapt the diffusion model to replicate the desired scene attributes accurately.

RefFusion's process begins with the selection of a reference image, essentially what the user wants the scene to ultimately resemble. This image can be generated, or supplemented with additional real images that depict the desired output. RefFusion's inputs are a reference image, masks of the desired manipulation, and the training views from the Gaussian Splatting scene.

This method effectively tailors the generative process to specific scenes, significantly improving the base upon which RefFusion operates. By eliminating reliance on text guidance during the distillation phase, it enables RefFusion to use multi-scale score distillation, which captures both the larger global context of a scene and local, finer details.
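The multi-scale idea can be pictured as preparing two renderings for the diffusion prior: a downsampled global view of the whole frame and a random full-resolution local crop. The helper below is a hypothetical NumPy illustration of that preparation step, not code from the paper; it assumes a square image whose side is an integer multiple of the crop size.

```python
import numpy as np

def multiscale_crops(image, crop, rng):
    """Build the two inputs for a multi-scale distillation step:
    a block-averaged global view of the whole frame, plus a random
    local crop at full resolution (both crop x crop pixels).

    Assumes a square H x W x C image with H == W and H divisible by crop.
    """
    h, w, c = image.shape
    f = h // crop
    # Global context: average-pool the full image down to crop x crop.
    global_view = image.reshape(crop, f, crop, f, c).mean(axis=(1, 3))
    # Local detail: a randomly placed full-resolution crop.
    y = int(rng.integers(0, h - crop + 1))
    x = int(rng.integers(0, w - crop + 1))
    local_view = image[y:y + crop, x:x + crop]
    return global_view, local_view
```

During distillation, both views would be passed to the diffusion prior and their score gradients combined, so the scene receives supervision at both scales.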

The system uses the original dataset from the Gaussian Splatting scene to create a mask that identifies which parts of the scene require inpainting. This is determined by measuring each Gaussian's contribution to volume rendering within the scene; Gaussians that meet or exceed a certain threshold are included in the masking group.
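A minimal sketch of this selection step, assuming per-Gaussian blending weights have already been accumulated over the training views (the function name and normalization are illustrative, not from the released code):

```python
import numpy as np

def select_gaussians_for_inpainting(weights, threshold=0.5):
    """Flag Gaussians for the inpainting mask by their accumulated
    alpha-blending contribution to the rendered training views.

    weights: (num_gaussians,) accumulated rendering weights inside
             the user's 2D edit region, summed over training views.
    Returns a boolean mask over the Gaussians.
    """
    # Normalize so a single threshold is comparable across scenes.
    norm = weights / max(float(weights.max()), 1e-8)
    return norm >= threshold
```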

In areas identified by the mask, RefFusion leverages information from the reference images to dictate how these regions should be filled. The system's adaptation of the diffusion model to the reference views ensures that the inpainted content matches the existing or desired scene elements with high fidelity. Moreover, depth prediction from monocular depth estimators is utilized to align the 3D information with the 2D inpainted results, further enhancing the scene's depth accuracy and visual coherence. Once all the relevant parts are labeled, they are rasterized back into the training views.
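One common way to perform such an alignment, shown here as a hedged sketch rather than RefFusion's exact procedure, is to fit a per-image scale and shift that maps the monocular depth prediction onto the depth rendered from the scene, using least squares over pixels marked valid (for example, those outside the edit mask):

```python
import numpy as np

def align_monocular_depth(pred_depth, rendered_depth, valid_mask):
    """Fit a per-image scale and shift (least squares) mapping the
    monocular prediction onto the depth rendered from the 3DGS scene,
    using only pixels marked valid (e.g. outside the edit mask)."""
    x = pred_depth[valid_mask].ravel()
    y = rendered_depth[valid_mask].ravel()
    # Solve y ≈ scale * x + shift in the least-squares sense.
    A = np.stack([x, np.ones_like(x)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, y, rcond=None)
    return scale * pred_depth + shift
```

The aligned depth map can then supervise the geometry of the inpainted region, since monocular predictions are only defined up to an unknown scale and offset.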

To blend the scene more seamlessly, RefFusion includes a final adversarial step, where a discriminator is used to mitigate color mismatches and artifacts. This step draws inspiration from GANeRF, particularly in the hyperparameters used for the adversarial loss.
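Conceptually, this step can be written as a standard non-saturating GAN objective over discriminator logits. The snippet below is an illustrative sketch of that objective, not the paper's implementation or its specific hyperparameters:

```python
import numpy as np

def softplus(x):
    # Numerically stable log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def adversarial_losses(d_real, d_fake):
    """Non-saturating GAN losses from discriminator logits on real
    training-view patches (d_real) and rendered inpainted patches
    (d_fake). The discriminator learns to separate the two; the
    generator (here, the 3DGS scene) is pushed to close the gap."""
    d_loss = softplus(-d_real).mean() + softplus(d_fake).mean()
    g_loss = softplus(-d_fake).mean()
    return d_loss, g_loss
```

Minimizing the generator loss penalizes renders the discriminator can still tell apart from real views, which is what drives down the residual color mismatches and artifacts.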

All of these processes lead to outputs that not only achieve high-quality inpainting results but also allow for capabilities like outpainting and object insertion.

Interestingly, there has been an uptick in generative Radiance Field methods recently. Today, another interesting development was announced with InFusion, indicating that inpainted Radiance Fields using diffusion are on the rise and serve as a legitimate way to fill in missing information or manipulate a scene effectively.
