RayGauss: Ray Tracing Gaussian Part Two

Michael Rubloff


Aug 8, 2024


A second paper exploring 3D Gaussian Ray Tracing has arrived, marking the second departure from splatting to come out of research.

Just under a month ago came the announcement of NVIDIA's 3DGRT method, and it didn't take long for RayGauss to follow. Both papers show stronger baseline results than Gaussian Splatting, and while the two diverge in strategy from one another, both represent early forays into a new style of Radiance Field. It's also likely that both were developed concurrently, and I would be curious to see what mutual benefit could be gleaned from comparing the two.

The core idea of RayGauss is to decompose radiance fields into elliptical basis functions, each optimized for position, orientation, and scale. This decomposition allows RayGauss to finely adapt to the scene's geometry, overcoming the resolution limits imposed by voxel grids and other fixed-structure methods.
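As a rough sketch of what such an elliptical basis function looks like (not the paper's implementation; the function name and parameterization here are hypothetical), each primitive can be evaluated as an anisotropic Gaussian whose covariance comes from its rotation and per-axis scales:

```python
import numpy as np

def elliptical_gaussian(x, mean, rotation, scale, amplitude=1.0):
    """Evaluate an anisotropic (elliptical) Gaussian at point x.

    rotation: 3x3 rotation matrix (orientation); scale: per-axis
    standard deviations. Together they define Sigma = R diag(s^2) R^T,
    so optimizing mean, rotation, and scale lets each primitive
    stretch and orient itself to fit local geometry.
    """
    # Move the offset into the Gaussian's local frame, then whiten
    # by the per-axis scales.
    local = rotation.T @ (x - mean)
    z = local / scale
    return amplitude * np.exp(-0.5 * np.dot(z, z))

# The density peaks at the mean and falls off at different rates
# along each local axis.
mean = np.zeros(3)
R = np.eye(3)
s = np.array([1.0, 0.5, 0.25])
print(elliptical_gaussian(mean, mean, R, s))  # 1.0 at the center
```

Because each primitive carries its own position, orientation, and scale, density can concentrate exactly where the scene needs it rather than on a fixed grid.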

To represent color variations, RayGauss employs a combination of Spherical Harmonics (SH) and Spherical Gaussians (SG). This approach effectively captures both low-frequency phenomena, like broad color gradients, and high-frequency details, such as specular highlights, enabling a more accurate representation of the scene's visual characteristics.
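The split between the two bases can be sketched as follows. This is a minimal illustration under assumed conventions (degree-1 real SH, a standard Spherical Gaussian lobe `a * exp(lambda * (d.mu - 1))`), not the paper's exact layout:

```python
import numpy as np

def sg_lobe(d, axis, sharpness, amplitude):
    """One Spherical Gaussian lobe; large sharpness gives a tight,
    specular-like highlight around `axis`."""
    return amplitude * np.exp(sharpness * (np.dot(d, axis) - 1.0))

def view_dependent_color(d, sh_coeffs, sg_lobes):
    """Color for view direction d: SH handles broad, low-frequency
    variation; SG lobes add sharp, high-frequency highlights.

    sh_coeffs: (3, 4) per-channel coefficients for degree-1 real SH
    (a hypothetical layout for this sketch).
    """
    # Real spherical harmonics basis up to degree 1.
    Y = np.array([0.2820948,
                  0.4886025 * d[1],
                  0.4886025 * d[2],
                  0.4886025 * d[0]])
    color = sh_coeffs @ Y  # smooth gradients
    for axis, sharpness, amplitude in sg_lobes:
        color = color + sg_lobe(d, axis, sharpness, amplitude)
    return color
```

Low-order SH alone blurs out speculars; the SG lobes recover them without needing a very high SH degree.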

A significant challenge in implementing RayGauss was to develop an efficient ray casting algorithm capable of handling irregularly spaced Gaussians. Traditional ray casting involves integrating radiance fields along a ray, which becomes complex when dealing with sparse, non-uniform distributions of Gaussians. RayGauss addresses this by using a slab-by-slab integration method. Instead of processing each sample along a ray individually, it accumulates color and density properties across multiple samples within a slab of space. This not only reduces computational overhead but also avoids common artifacts like flickering, which plagues Gaussian splatting.
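The accumulation idea can be sketched with standard emission-absorption compositing over fixed-size slabs. This is a simplified stand-in, not RayGauss's kernel: in the real renderer the point of the slab is to gather all Gaussians overlapping that interval of the ray at once, which this sketch only hints at:

```python
import numpy as np

def integrate_ray_slabs(samples, slab_size=8):
    """samples: (density, color) pairs along the ray, front to back.

    Composite radiance slab by slab with early termination once the
    ray is effectively opaque.
    """
    transmittance = 1.0
    radiance = np.zeros(3)
    for start in range(0, len(samples), slab_size):
        slab = samples[start:start + slab_size]
        # All samples in this slab are processed together; a real
        # implementation would first collect the Gaussians that
        # overlap this interval of the ray.
        for density, color in slab:
            alpha = 1.0 - np.exp(-density)  # opacity of a unit step
            radiance += transmittance * alpha * np.asarray(color)
            transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # nothing behind contributes
            break
    return radiance, transmittance
```

Early termination after an opaque slab is one of the ways batching samples saves work relative to touching every primitive along the full ray.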

The implementation leverages a Bounding Volume Hierarchy (BVH) to manage and accelerate ray-ellipsoid intersections. By building the BVH with ellipsoidal bounds, RayGauss can efficiently determine which primitives (elliptical Gaussians) influence a given ray, enabling fast and accurate color computations.
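The intersection test at each BVH leaf reduces to a ray-ellipsoid hit. A common way to do it, sketched here (the cutoff radius and function shape are assumptions, not taken from the paper), is to map the ray into the frame where the Gaussian's bound is a unit sphere:

```python
import numpy as np

def ray_hits_ellipsoid(origin, direction, center, rotation, scale, cutoff=3.0):
    """Test a ray against the `cutoff`-sigma ellipsoidal bound of a
    Gaussian. Returns (t_enter, t_exit) or None on a miss.
    """
    # Transform the ray into the frame where the bound is a unit sphere.
    o = rotation.T @ (origin - center) / (scale * cutoff)
    d = rotation.T @ direction / (scale * cutoff)
    # Solve |o + t*d|^2 = 1, a quadratic in t.
    a = np.dot(d, d)
    b = 2.0 * np.dot(o, d)
    c = np.dot(o, o) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    sqrt_disc = np.sqrt(disc)
    t0 = (-b - sqrt_disc) / (2.0 * a)
    t1 = (-b + sqrt_disc) / (2.0 * a)
    return (t0, t1) if t1 >= 0.0 else None
```

The BVH prunes most primitives before this test ever runs; only rays whose bounding-box traversal reaches a leaf pay for the quadratic solve.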

All of this results in higher visual fidelity than Gaussian Splatting, but does come at a cost.

The training times are considerably longer, with the synthetic scenes taking 30 minutes to train and the Mip-NeRF 360 dataset taking two and a half hours. Personally, I don't mind longer training times in exchange for higher fidelity outputs, but that won't be true for everyone. They also train on a single 4090, meaning this should be accessible on consumer-grade GPUs.

While both 3DGRT and RayGauss push the boundaries of real-time rendering and novel view synthesis, they do so with different priorities. 3DGRT emphasizes efficiency and the integration of advanced lighting effects, making it a strong candidate for interactive applications. RayGauss, on the other hand, prioritizes rendering quality and photorealism, making it better suited for scenarios where visual fidelity is paramount.

Both methods leverage Gaussian-based representations but differ in their core methodologies and applications. On the performance side, the two diverge a bit, with 3DGRT's frame rates landing higher, in the 55-190fps range, compared to RayGauss, which hits 25fps. The trade-off is that RayGauss's visual fidelity looks to be a little stronger.

The authors have stated that the code will be released, but there is no timeline for that currently. Only time will tell if we continue to see more Ray Tracing based approaches, but I have a feeling we will be seeing more sooner than expected. For more comparisons between RayGauss and Gaussian Splatting, check out their Project Page!
