LGM: Prompt to 3D using Gaussians

Michael Rubloff

Feb 12, 2024


The first six weeks of 2024 have seen remarkable advancements in the realm of converting prompts to detailed 3D models. Joining this innovative wave is the Large Multi-View Gaussian Model (LGM), which has set a new benchmark for high-resolution 3D content creation from text prompts or single-view images. Unlike many existing methods that rely on Neural Radiance Fields (NeRFs) as their backbone, LGM uses 3D Gaussians.

At the core of LGM is its use of multi-view Gaussian features, which capture and integrate information from multiple angles of an object. This multi-view approach yields a richly detailed 3D Gaussian representation and sets LGM apart from NeRF-based methodologies.

The process leverages an asymmetric U-Net architecture designed to handle multi-view images by predicting and fusing Gaussians. The asymmetry lets the network take higher-resolution input images while capping the number of output Gaussians, keeping the model efficient without compromising the quality of the 3D output.
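As a rough sketch of the idea (not the authors' code), picture a U-Net whose decoder stops upsampling before returning to the input resolution, so every output pixel becomes one Gaussian's parameter vector. The 256/128 resolutions and the 14-channel layout (position, scale, rotation quaternion, opacity, RGB) below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AsymmetricUNet(nn.Module):
    """Toy asymmetric U-Net: 256x256 input views -> 128x128 Gaussian maps.

    Illustrative only; the real LGM network is far deeper and adds
    cross-view attention in its inner blocks.
    """
    def __init__(self, in_ch=3, gaussian_ch=14):
        super().__init__()
        # Encoder: 256 -> 128 -> 64
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.SiLU())
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.SiLU())
        # Decoder: 64 -> 128 only, stopping short of the 256 input resolution
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.SiLU())
        # Each output pixel is one Gaussian: 3 pos + 3 scale + 4 quat + 1 opacity + 3 RGB = 14
        self.head = nn.Conv2d(64, gaussian_ch, 3, padding=1)

    def forward(self, views):                  # views: (B*V, 3, 256, 256)
        x = self.enc2(self.enc1(views))        # (B*V, 128, 64, 64)
        x = self.dec1(x)                       # (B*V, 64, 128, 128)
        return self.head(x)                    # (B*V, 14, 128, 128)

net = AsymmetricUNet()
out = net(torch.randn(4, 3, 256, 256))         # four views of one object
print(out.shape)                               # torch.Size([4, 14, 128, 128])
```

Even at this toy output resolution, four views already yield 4 × 128² = 65,536 Gaussians.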

Key to the U-Net's effectiveness is how it shares information across all input views, thanks to attention blocks integrated into its deeper layers. These blocks ensure that the final 3D model is not just a collection of independent views but a coherent synthesis that accurately reflects the object's form and details.
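A minimal sketch of cross-view attention, assuming a standard multi-head self-attention over tokens pooled from every view; the dimensions, head count, and residual wiring are placeholders, not LGM's exact block.

```python
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    """Self-attention over tokens gathered from all views of one object.

    Because tokens from every view attend to each other, the fused
    features describe the object as a whole rather than V independent
    images. Dimensions are illustrative.
    """
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats, num_views):
        # feats: (B*V, C, H, W) from a deep U-Net layer
        bv, c, h, w = feats.shape
        b = bv // num_views
        # One token per spatial location, across *all* views: (B, V*H*W, C)
        tokens = feats.reshape(b, num_views, c, h * w).permute(0, 1, 3, 2)
        tokens = tokens.reshape(b, num_views * h * w, c)
        fused, _ = self.attn(self.norm(tokens), self.norm(tokens), self.norm(tokens))
        # Back to per-view feature maps, with a residual connection
        fused = fused.reshape(b, num_views, h * w, c).permute(0, 1, 3, 2).reshape(bv, c, h, w)
        return feats + fused

block = CrossViewAttention()
x = block(torch.randn(4, 128, 16, 16), num_views=4)  # B=1, V=4
print(x.shape)                                       # torch.Size([4, 128, 16, 16])
```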

Another critical step is differentiable rendering, applied to the fused 3D Gaussians. This technique generates novel views from the 3D model, enabling end-to-end image-level supervision at high resolutions, which is essential to the fidelity and detail LGM achieves.
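In outline, the supervision reduces to an image loss between differentiably rendered views and ground-truth renders. The sketch below assumes a standard MSE-plus-perceptual recipe and leaves the renderer abstract; `rasterize` and the 0.5 weight are hypothetical stand-ins, not LGM's exact loss.

```python
import torch
import torch.nn.functional as F

def image_level_loss(rendered, target, lpips_fn=None, lam=0.5):
    """Image-level supervision: pixel MSE plus an optional perceptual
    (LPIPS) term, a common recipe for Gaussian reconstruction training."""
    loss = F.mse_loss(rendered, target)
    if lpips_fn is not None:
        loss = loss + lam * lpips_fn(rendered, target).mean()
    return loss

loss = image_level_loss(torch.rand(1, 3, 512, 512), torch.rand(1, 3, 512, 512))

# One training step, schematically (rasterize() stands in for a real
# differentiable Gaussian renderer such as a CUDA splatting kernel):
#   gaussians = unet(input_views)                      # predict + fuse
#   rendered  = rasterize(gaussians, novel_cameras)    # differentiable render
#   image_level_loss(rendered, gt_images).backward()   # gradients reach the U-Net
```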

Addressing the practical application of these generated models, LGM proposes a bespoke algorithm for converting 3D Gaussians into smooth and textured meshes.

The overall design keeps generation rapid, producing a 3D object within five seconds, while significantly raising the training resolution to 512.

A standout feature of LGM is its innovative mesh extraction process, designed to convert the generated 3D Gaussians into practical polygonal meshes. How they do this, though, might surprise you. Traditional methods often struggle with the sparsity of 3D Gaussians, producing meshes with visible imperfections. LGM overcomes this by adopting NVIDIA's Instant-NGP.

The process begins with the on-the-fly training of a NeRF model using images rendered from the 3D Gaussians, capturing detailed geometry and appearance. This NeRF representation is then converted into a coarse mesh using the Marching Cubes algorithm, which establishes the basic form of the model.
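A hedged sketch of the coarse-mesh step: fit a density field to the Gaussian renderings, sample it on a regular grid, and run Marching Cubes. Here scikit-image's marching_cubes stands in, and a trivial analytic sphere replaces the distilled Instant-NGP-style NeRF the paper actually trains.

```python
import numpy as np
from skimage import measure

# Stand-in for a density field distilled from Gaussian renderings.
# A real pipeline would train an Instant-NGP-style NeRF on images
# rendered from the 3D Gaussians and query its density here.
def density(pts):
    return 1.0 - np.linalg.norm(pts, axis=-1)   # a unit sphere, for illustration

# Sample the field on a regular grid ...
n = 64
axis = np.linspace(-1.2, 1.2, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
vol = density(grid.reshape(-1, 3)).reshape(n, n, n)

# ... and extract the coarse mesh with Marching Cubes at the surface level set.
verts, faces, normals, _ = measure.marching_cubes(vol, level=0.0)
print(verts.shape, faces.shape)                 # (num_vertices, 3), (num_faces, 3)
```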

To achieve a high-quality mesh, LGM employs two hash grids that meticulously reconstruct the model's geometry and texture based on the Gaussian renderings. This approach allows for iterative refinement and the application of differentiable rendering to enhance the mesh's details and textures. The final step involves baking the appearance field onto the mesh, yielding a detailed, smooth, and textured 3D model ready for various applications.
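For intuition, a multi-resolution hash grid maps a 3D point to learned features that a small decoder turns into geometry (density or SDF) or color. The toy encoder below uses nearest-corner lookups with no trilinear interpolation or MLP; the table size, level count, and feature width are illustrative assumptions, not LGM's settings.

```python
import torch
import torch.nn as nn

class HashGrid3D(nn.Module):
    """Minimal multi-resolution hash encoding (Instant-NGP style), for
    illustration only. LGM fits one such grid for geometry and another
    for appearance against the Gaussian renderings.
    """
    def __init__(self, levels=8, table_size=2**16, feat_dim=2, base_res=16):
        super().__init__()
        self.res = [base_res * 2**i for i in range(levels)]
        self.tables = nn.ParameterList(
            [nn.Parameter(torch.randn(table_size, feat_dim) * 1e-3) for _ in range(levels)]
        )

    def forward(self, pts):                     # pts in [0, 1]^3, shape (N, 3)
        feats = []
        for res, table in zip(self.res, self.tables):
            idx = (pts * res).long()            # nearest grid corner (no trilinear blend, for brevity)
            # Spatial hash from Instant-NGP: XOR of coordinates times large primes
            h = (idx[:, 0] ^ (idx[:, 1] * 2654435761) ^ (idx[:, 2] * 805459861)) % table.shape[0]
            feats.append(table[h])
        return torch.cat(feats, dim=-1)         # (N, levels * feat_dim)

geometry_field = HashGrid3D()                   # supervised toward geometry (density / SDF)
texture_field = HashGrid3D()                    # supervised toward RGB
f = geometry_field(torch.rand(1024, 3))
print(f.shape)                                  # torch.Size([1024, 16])
```

Once both fields converge, the appearance field is queried at points on the mesh surface and written into a texture map, which is the baking step described above.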

Remarkably, this comprehensive mesh extraction process, from 3D Gaussians to a refined NeRF-based mesh, completes in about a minute with an optimized implementation.

There are currently a few ways to test out LGM. Mr. For Example has already integrated LGM into Comfy 3D, and Hugging Face has a demo space. There is also 3DTopia's implementation of LGM, but at the time of publishing the space is seeing high traffic, so it's taking a little longer to generate.

LGM in Comfy3D

Prompt-to-3D continues to push forward, with results that are both faster and higher fidelity. While testing LGM through Hugging Face didn't quite hit the five-second generation mark, the output's fidelity in approximately ten seconds remains impressive, in line with Luma AI's Genie and VAST's Tripo.
