GaussMR at SIGGRAPH

Michael Rubloff

Jul 29, 2024

In this insightful interview, we look into the creative and technical journey behind the development of GaussMR, which is currently on display at SIGGRAPH's art exhibition. Our guest, an experienced researcher and developer in the realm of neural view synthesis and interactive environments, shares their inspirations, challenges, and the innovative breakthroughs that led to GaussMR's inception. As a long-time enthusiast of neural radiance fields (NeRFs) and their potential applications in real-time environments, our guest was captivated by the Gaussian Splatting paper in 2023, which provided the missing piece to their vision.

With a focus on pushing the boundaries of AR/VR/MR technologies, this interview explores the motivation behind integrating Gaussian Splatting into highly interactive spaces, overcoming technical hurdles, and the implications for future developments in immersive content creation. Additionally, our guest's journey through multiple exhibitions at SIGGRAPH highlights the evolution of their work, from pioneering GPU particle systems to the innovative GaussMR.

Join us as we uncover the story behind GaussMR, a groundbreaking advancement poised to revolutionize interactive and immersive experiences in virtual and augmented reality.

How did you come up with the idea for GaussMR?

I’d been interested in neural view synthesis research like NeRF since around 2020, but it was difficult to use in real-time environments. Then in 2023, I read the Gaussian Splatting paper and thought it was what I had been looking for.

However, as I followed the papers that succeeded Gaussian Splatting, I noticed that most of them were about speeding up training, improving accuracy, or expanding the capture range; there were no papers about introducing it into interactive spaces like games. That's why I began to focus intensively on research to incorporate Gaussian Splatting into real-time, highly interactive AR/VR environments.

What inspired you to work on Gaussian Splatting and its application to interactive environments?

I’d been conducting research on improving immersion in AR/MR environments for a long time, and I felt that simple 3D objects and shaders seemed out of place and appeared fake when compared to completely realistic AR environments. I’d long thought that Neural Radiance Fields, which can accurately reconstruct real spaces, could be a means to solve this problem. And, when I read the paper on Gaussian Splatting, which allows for much faster rendering than NeRF, I felt like that was a better approach.

This is your sixth time exhibiting at SIGGRAPH. Congratulations! How does GaussMR build upon your previous exhibitions, such as Dimix and Garage2?

Garage and Garage2, which reconstruct real space using GPU particles, form an important foundation for my subsequent research. Dimix applies Stable Diffusion to 360-degree images, but the system can also perform three-dimensional collision detection by reconstructing 3D positions from depth information using the Garage system, so players benefit from powerful 2D image generation while gaining a three-dimensional experience. GaussMR performs fast and flexible processing with Compute Shaders by treating Gaussians as GPU particles. In particular, the fast neighbor search algorithm developed for Garage has been very helpful for fast collision detection and SDF noise reduction in GaussMR.
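The interview doesn't detail the fast neighbor search used in Garage, but a common GPU-friendly approach to this kind of query is a uniform spatial grid: each particle is bucketed by its cell, so a radius query only scans the 27 surrounding cells instead of all N particles. A minimal CPU sketch of that idea (function names here are illustrative, not from GaussMR):

```python
import math
from collections import defaultdict

def build_grid(points, cell_size):
    """Bucket each point index by its integer cell coordinate."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        key = (int(math.floor(x / cell_size)),
               int(math.floor(y / cell_size)),
               int(math.floor(z / cell_size)))
        grid[key].append(i)
    return grid

def neighbors_within(points, grid, cell_size, query, radius):
    """Return indices of points within `radius` of `query`.

    When radius <= cell_size, only the 27 cells around the query
    need to be scanned."""
    qx, qy, qz = query
    cx = int(math.floor(qx / cell_size))
    cy = int(math.floor(qy / cell_size))
    cz = int(math.floor(qz / cell_size))
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for i in grid.get((cx + dx, cy + dy, cz + dz), ()):
                    px, py, pz = points[i]
                    if (px - qx) ** 2 + (py - qy) ** 2 + (pz - qz) ** 2 <= radius ** 2:
                        found.append(i)
    return found
```

On a GPU this is usually expressed as a sort-by-cell-hash plus per-cell start/end indices, but the query pattern is the same.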

Also, in Neural AR, my SIGGRAPH research from 2019, I attempted to integrate Convolutional Neural Networks into AR environments in real-time. This was before the term "Generative AI" became common, and I think it was a good decision to have been working on AI technology since then.

Your paper mentions the use of GPU particles and Signed Distance Fields (SDF). How do these components integrate to enhance real-time interactions and renderings?

I feel that GPU Particles hold great potential for use in XR spaces. They enable complex collisions, destruction, and animations at a fine particle level, which suits XR spaces that demand high interactivity. Assigning one particle to one thread makes the processing fully parallel, and with skilled GPU programming it can run at high speed.
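The one-particle-per-thread mapping works because each particle's update reads and writes only its own state, so no invocation depends on another. A serial Python stand-in for such a kernel (hypothetical names; a real implementation would be a compute shader dispatch):

```python
def integrate_particle(p, v, dt, gravity=(0.0, -9.8, 0.0)):
    """Kernel body for one particle: it touches no other particle's
    state, so every invocation is independent and can run on its own
    GPU thread."""
    vx = v[0] + gravity[0] * dt
    vy = v[1] + gravity[1] * dt
    vz = v[2] + gravity[2] * dt
    return (p[0] + vx * dt, p[1] + vy * dt, p[2] + vz * dt), (vx, vy, vz)

def step(positions, velocities, dt):
    # Serial stand-in for dispatching len(positions) GPU threads.
    out = [integrate_particle(p, v, dt) for p, v in zip(positions, velocities)]
    return [p for p, _ in out], [v for _, v in out]
```

Anything that couples particles (collisions between them, for example) needs extra structure such as the neighbor grid described earlier, which is where careful GPU programming comes in.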

SDFs are very powerful for performing more complex and accurate physical simulations after collisions, and for smooth rendering using raymarching. The realistic space reproduced by Gaussian Splatting pairs well with the high expressive power of SDFs. In recent months, several papers have been published on generating meshes offline and using them in real-time environments. GaussMR, however, constructs and uses its SDFs in real-time while also enabling rich interactions, which is a distinct advantage.
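Raymarching over an SDF is typically done with sphere tracing: because the SDF value at a point is the distance to the nearest surface, it is also the largest step the ray can safely take without skipping past geometry. A minimal sketch, using a unit sphere as a stand-in for a real reconstructed scene SDF:

```python
def raymarch(sdf, origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: advance along the ray by the SDF value,
    the largest step guaranteed not to overshoot the surface."""
    t = 0.0
    for _ in range(max_steps):
        x = (origin[0] + direction[0] * t,
             origin[1] + direction[1] * t,
             origin[2] + direction[2] * t)
        d = sdf(x)
        if d < eps:
            return t          # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None               # miss

def sphere_sdf(p):
    """Example SDF: a unit sphere at the origin."""
    return (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 - 1.0
```

The cost is one SDF evaluation per step, which is why the interview notes raymarching remains a fairly slow process worth optimizing.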

Generally, constructing SDFs is a relatively slow process, but it can be greatly accelerated on GPUs. Specifically, the Jump Flooding Algorithm [Rong and Tan 2006] and the Parallel Banding Algorithm [Cao et al. 2010] are excellent parallel algorithms for SDF construction, and GaussMR implements improved versions of both. Beyond SDF construction, raymarching is also fairly slow, so I aim to explore more efficient rendering methods in the future.
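For reference, the core of the Jump Flooding Algorithm is a sequence of passes with halving step sizes: at each step k, every cell inspects neighbors at offset ±k and adopts a closer seed if one is found, so seed information propagates across the grid in O(log N) passes. A small 2D CPU sketch of the textbook algorithm (GaussMR's improved GPU version, and the 3D case it needs, will differ):

```python
def jump_flood(width, height, seeds):
    """Textbook JFA on a 2D grid: nearest[y][x] ends up holding the
    coordinates of (approximately) the closest seed to cell (x, y)."""
    nearest = [[None] * width for _ in range(height)]
    for (sx, sy) in seeds:
        nearest[sy][sx] = (sx, sy)

    def dist2(cell, seed):
        return (cell[0] - seed[0]) ** 2 + (cell[1] - seed[1]) ** 2

    # Step sizes N/2, N/4, ..., 1.
    steps, k = [], 1
    while k < max(width, height):
        steps.append(k)
        k *= 2
    for k in reversed(steps):
        new = [row[:] for row in nearest]
        for y in range(height):
            for x in range(width):
                best = nearest[y][x]
                for dy in (-k, 0, k):
                    for dx in (-k, 0, k):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < width and 0 <= ny < height:
                            cand = nearest[ny][nx]
                            if cand is not None and (
                                best is None
                                or dist2((x, y), cand) < dist2((x, y), best)
                            ):
                                best = cand
                new[y][x] = best
        nearest = new
    return nearest
```

Each pass is embarrassingly parallel over cells, which is what makes the algorithm such a good fit for compute shaders; the signed distance then falls out of the stored nearest-seed coordinates.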

Additionally, in areas where SDFs are not constructed or raymarching is not performed, all GPU Particles can maintain simple interactions, such as color changes and scaling animations. The flexibility to adjust how much area is covered by SDFs according to GPU resources is also an important feature of GaussMR.

How do you envision GaussMR being used in future VR/AR/MR content creation and gaming industries? How can people build upon GaussMR to build video game experiences, like this one on Twitter?

I think Gaussian Splatting itself will be very powerful in creating static objects for VR/AR/MR content. However, many game environments require high interactivity, so that's where GaussMR could be useful. In the future, I want to make it possible for developers who are interested in GaussMR to be able to utilize it. I'll answer this in more detail in a later question.

What are the potential future developments or extensions for GaussMR, such as 4D Gaussian Splatting or Text to Gaussian Splatting?

GaussMR is not locked into any specific Gaussian Splatting method; it is a versatile technology that can be applied to any method as long as the position information of the GPU particles is available. Since all processes, such as SDF building and collision detection, are performed frame by frame, it is fully capable of adapting to 4D Gaussian Splatting, which is one of the next directions I'm considering.

Will GaussMR be opened to the public post SIGGRAPH for people to try? If so, will people be able to use their own .plys?

Yes, we're planning to conduct a survey at SIGGRAPH to determine what format people would like to use, and then release it in an appropriate manner. People will likely be able to upload their own .ply files.

If there are requests to incorporate it into specific products, I might provide access to the code along with consulting and development support.

What were the main difficulties you faced while developing GaussMR and how did you overcome them?

In Gaussian Splatting, the space of Gaussians can become very sparse (especially for simple floors and walls with little detail). While this is desirable in terms of memory usage, in GaussMR it can lead to cases where there are not enough seeds to build SDFs, and they cannot be constructed cleanly. Ideally, we'd like to interpolate the sparse areas, but since Gaussian Splatting usually carries no semantic information, it's difficult to determine which parts should be interpolated. Additionally, since Gaussian Splatting generally does not retain normal maps, it cannot accurately represent physically correct light interactions during color changes of GPU particles. Therefore, it is necessary to apply Gaussian Splatting methods capable of generating normal maps. This is something I'd like to tackle in the future.

Is it compatible with specific VR headsets?

Yes, but it is primarily intended for use with high-performance GPUs. That means connecting a headset to a PC with a feature like Meta Quest Link and rendering on a discrete GPU.

At SIGGRAPH, it will be exhibited using the Meta Quest Pro and the Meta Quest 3.

Does GaussMR also explore Frustum Culling with 3DGS?

Yes, a custom GPU frustum culling method is implemented. In general Gaussian Splatting, all Gaussian-related parameters (covariance, rotation, scale, etc.) are calculated in Compute Shaders before the rendering pipeline, which wastes GPU computing resources on Gaussians that are never rendered. In GaussMR, we run our own GPU culling right before these Compute Shader calculations, achieving a significant performance boost.
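A common way to implement such a culling pass is a conservative sphere-vs-frustum test, run one thread per Gaussian so the surviving indices can be compacted before the expensive parameter computation. A serial sketch of that test (illustrative names, not GaussMR's actual code):

```python
def make_plane(normal, point):
    """Plane as (nx, ny, nz, d) with n.x + d = 0; normal points
    toward the inside of the frustum."""
    d = -(normal[0] * point[0] + normal[1] * point[1] + normal[2] * point[2])
    return (normal[0], normal[1], normal[2], d)

def inside_frustum(center, radius, planes):
    """Conservative test: a Gaussian approximated by a bounding
    sphere is kept unless it lies fully behind some plane."""
    for nx, ny, nz, d in planes:
        if nx * center[0] + ny * center[1] + nz * center[2] + d < -radius:
            return False
    return True

def cull(centers, radii, planes):
    """Serial stand-in for a one-thread-per-Gaussian GPU pass:
    emit indices of Gaussians that survive culling."""
    return [i for i, (c, r) in enumerate(zip(centers, radii))
            if inside_frustum(c, r, planes)]
```

On the GPU, the surviving indices would typically be written to a compacted buffer via an atomic counter or prefix sum, so later compute passes touch only visible Gaussians.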

What motivated you to pursue research in computer graphics and interactive techniques, and what has been your most rewarding experience so far?

There are many Japanese anime and manga that imagine worlds with highly developed futuristic technology, such as the anime Psycho-Pass, and I've always had a strong desire to realize such worlds if possible. This is also mentioned in the interview published on the SIGGRAPH Blog below, so please take a look.

https://blog.siggraph.org/2021/11/the-magic-behind-spatial-control-in-ar.html/

While I often have wonderful experiences solving technically difficult problems, the moments when users enjoy or are surprised by the technologies and experiences I've created are what really motivate me. Getting positive feedback at a place like SIGGRAPH, where experts from all over the world gather, is especially rewarding. This is why I continue to contribute to the SIGGRAPH community year after year.

Are there any particular researchers or projects that have significantly influenced your work on GaussMR?

I'm strongly drawn to all GPU-related technologies, so I'm inspired daily by NVIDIA's research team. Papers like InstantNGP [Müller et al. 2022] and GANcraft [Hao et al. 2021] have greatly inspired me as I aim to integrate AI technology into VR/AR/MR spaces in creative ways. Beyond AI, I also follow excellent work on parallel processing algorithms, such as Maximizing Parallelism in the Construction of BVHs, Octrees, and k-d Trees [Karras 2012]. And I have great respect for the wonderful engineers in game development communities like Unreal Engine's TechArt community.
