Research

RealmDreamer's Generative Scenes

Michael Rubloff

Apr 11, 2024

Since the unveiling of Sora's large-scale generative Radiance Fields, the tech world has been buzzing with anticipation about the future of 3D scene generation. There hasn't been much public work since then showing what could be coming, but today we're looking at RealmDreamer, which creates scene-level generations from text prompts.

You have to imagine that they're using some kind of radiance field method, and they are: 3D Gaussian Splatting. The process begins with the generation of a reference image from a text prompt, followed by a monocular depth model that lifts it into an initial 3D point cloud.

More specifically, not wanting to start from a rocky foundation, they initialize the scene using a pretrained 2D prior at a predefined pose, or in other words, a generated 2D image at a specific place within a larger (eventual) scene. In the examples shown, they use either Stable Diffusion XL, Adobe Firefly, or DALL·E 3. They then use a combination of the monocular depth estimators Marigold and Depth Anything to convert it to 3D.
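To make that initialization concrete, here's a rough sketch of what the step could look like with off-the-shelf models. The specific checkpoints, the focal length, and the unprojection details are my own assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch: generate a reference image from a text prompt, estimate monocular
# depth, and unproject it into an initial point cloud at a predefined pose.
import numpy as np
import torch
from diffusers import StableDiffusionXLPipeline
from transformers import pipeline as hf_pipeline

prompt = "a cozy wooden cabin interior, warm lighting"

# 1) Reference image from a pretrained 2D prior (SDXL here, as one example).
t2i = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
image = t2i(prompt).images[0]

# 2) Monocular depth (Depth Anything via the transformers pipeline).
#    Note this is relative depth; a real pipeline would align or calibrate it.
depth_estimator = hf_pipeline(
    "depth-estimation", model="LiheYoung/depth-anything-large-hf"
)
depth = np.array(depth_estimator(image)["depth"], dtype=np.float32)

# 3) Unproject pixels into 3D with assumed pinhole intrinsics.
H, W = depth.shape
fx = fy = 0.8 * W              # assumed focal length
cx, cy = W / 2.0, H / 2.0
u, v = np.meshgrid(np.arange(W), np.arange(H))
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
colors = np.asarray(image, dtype=np.float32).reshape(-1, 3) / 255.0
# `points` and `colors` would seed the 3D Gaussians at the predefined pose.
```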

Together, these inform the initial point cloud. From that single view, however, a wide variety of outputs and scenarios could follow, which also means the possibility of a bad starting point. They raise that floor by outpainting the original generated image to extend the point cloud to additional viewpoints, and they use Stable Diffusion to fill in the gaps that get uncovered along the way. This step is critical to the success of RealmDreamer.
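The gap-filling idea can be sketched very simply: render the point cloud from a new camera, mark the pixels no points project into, and let an inpainting diffusion model hallucinate plausible content there. The model checkpoint and the rendering helper are assumptions for illustration, not the paper's code.

```python
# Sketch: fill the holes in a novel view with Stable Diffusion inpainting.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

def fill_novel_view(rendered: Image.Image, hole_mask: Image.Image, prompt: str) -> Image.Image:
    """rendered: RGB render of the point cloud with holes;
    hole_mask: white wherever no points project (the region to inpaint)."""
    return inpaint(prompt=prompt, image=rendered, mask_image=hole_mask).images[0]

# Each filled view can then be lifted back to 3D with the same depth model
# and merged into the growing point cloud before optimization begins.
```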

The final touch in RealmDreamer's process is a finetuning phase that sharpens details and ensures the cohesiveness of the scene. This phase uses a text-to-image diffusion model personalized for the input image, ensuring that every aspect of the scene, from the textures to the lighting, matches the original textual prompt.
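One common way to use a personalized diffusion model for this kind of refinement is a score-distillation-style loss on renders of the scene; here is a hedged sketch of that idea. The `personalized_unet`, `scheduler`, and latent-space assumptions are placeholders for illustration, not the authors' implementation.

```python
# Sketch: noise a render, ask the personalized model to denoise it, and use
# the discrepancy as a gradient signal that sharpens the 3D scene.
import torch
import torch.nn.functional as F

def diffusion_guidance_loss(render_latents: torch.Tensor, text_embeds: torch.Tensor,
                            personalized_unet, scheduler) -> torch.Tensor:
    """render_latents: renders already encoded into the model's latent space.
    Assumes an epsilon-prediction UNet (standard Stable Diffusion setup)."""
    t = torch.randint(50, 950, (render_latents.shape[0],), device=render_latents.device)
    noise = torch.randn_like(render_latents)
    noisy = scheduler.add_noise(render_latents, noise, t)
    pred = personalized_unet(noisy, t, encoder_hidden_states=text_embeds).sample
    return F.mse_loss(pred, noise)
```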

You might be interested to know that the inpainting and the finetuning stages are conducted within Nerfstudio. They use the original implementation of Gaussian Splatting from Inria, but perhaps this could be officially supported within Nerfstudio in the coming months? In theory, you could generate an entire scene using RealmDreamer and then potentially build individual elements with SigNeRF, all without ever leaving the Nerfstudio ecosystem.

End to end, the full generation takes roughly 10 hours, but encouragingly it can be accomplished on a single GPU with 24GB of VRAM, opening the door to the upper end of consumer GPUs. As with all of generative AI, it will only progress rapidly in both fidelity and speed. This seems like a good starting place for scene-level generation. I also imagine that as diffusion-based methods improve in fidelity and view consistency, 3D will be a direct beneficiary, allowing for increasingly hyperrealistic outputs.

Check out the RealmDreamer project page for more information and even more examples!

There's so much happening right now, and while the majority of generative 3D has focused on individual objects thus far, I would not discount the progress being made publicly through research and quietly behind closed doors.
