TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering

Michael Rubloff

Jan 12, 2024

Gaussian Splatting has been the hottest radiance field method for the past four months, since SIGGRAPH. Now a new method, TRIPS (Trilinear Point Splatting for Real-Time Radiance Field Rendering), might have something to say about that.

Developed by a team of researchers at Friedrich-Alexander-Universität Erlangen-Nürnberg in Germany, TRIPS builds on some of the methodology that 3D Gaussian Splatting put forth and increases fidelity without sacrificing real-time capability.

I am very, very excited about this new radiance field type. TRIPS draws on two distinct methods, 3D Gaussian Splatting and Approximate Differentiable One-Pixel Point Rendering (ADOP), to get the best of both. With 3DGS, if you deviate too far from the initial capture path, the scene doesn't always hold up, introducing blurriness and artifacts. ADOP, on the other hand, excels at reproducing fine textures and filling gaps in space, but it carries a neural network overhead, albeit a small one.

TRIPS introduces a novel rasterization process in which points from a point cloud are splatted into a screen-space image pyramid. Unlike traditional methods that render points at a single resolution, TRIPS employs multiple layers at different resolutions, akin to a pyramid structure (remember PyNeRF?).

The choice of which pyramid layer to rasterize each point into is based on the point's projected size in screen space. Larger points are rendered into lower-resolution layers, efficiently covering more of the image, while smaller points are mapped to higher-resolution layers, ensuring fine details are preserved. Because each point is written as a 2×2×2 trilinear splat spanning two adjacent pyramid layers, TRIPS handles diverse point sizes adeptly without compromising on speed or image quality.
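The paper implements this rasterization on the GPU, and its code isn't out yet, so purely as an illustration, here is a pure-Python sketch of how a point's projected size could select two adjacent pyramid layers and produce 2×2×2 trilinear weights (the function name and details are my own assumptions, not the TRIPS implementation):

```python
import math

def trilinear_splat_weights(x, y, radius):
    """Sketch: pick the two pyramid layers whose pixel size brackets a
    point's projected radius, then splat with 2x2x2 trilinear weights
    (bilinear within each layer, linear across the two layers)."""
    level = math.log2(max(radius, 1.0))  # layer 0 is full resolution
    l0 = int(level)                      # finer of the two target layers
    t = level - l0                       # blend toward the coarser layer

    weights = []
    for dl, wl in ((0, 1.0 - t), (1, t)):        # linear across layers
        scale = 2.0 ** (l0 + dl)
        px, py = x / scale, y / scale            # position in that layer
        ix, iy = int(px), int(py)
        fx, fy = px - ix, py - iy
        for dy, wy in ((0, 1.0 - fy), (1, fy)):  # bilinear inside layer
            for dx, wx in ((0, 1.0 - fx), (1, fx)):
                weights.append(((l0 + dl, ix + dx, iy + dy), wl * wy * wx))
    return weights

# A point at pixel (100, 60) with a 3-pixel projected radius lands
# between layers 1 and 2; its eight weights sum to one.
taps = trilinear_splat_weights(100.0, 60.0, 3.0)
total = sum(w for _, w in taps)
```

The log-scaled layer index is what makes big points cheap: doubling a point's projected size only moves it one layer down the pyramid instead of quadrupling the pixels it has to touch.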

After the rasterization step, TRIPS employs a neural network to reconstruct a detailed image from the pyramid, ensuring the final render is free of gaps and artifacts. The lack of a neural network is something that people love about Gaussian Splatting, but the network in TRIPS is not just any network. It's specifically streamlined for efficiency, featuring gated convolutions and a self-bypass connection, an architecture that lets it perform its complex task with minimal computational demand.
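The paper's exact architecture isn't public yet, so treat this as a toy illustration of the gating idea only: a gated convolution learns a second "gate" filter whose sigmoid output decides, per output element, how much of the convolved feature passes through, and a bypass adds the raw input back so the layer only learns a correction (my residual-style reading of "self-bypass"). A minimal 1D version in pure Python:

```python
import math

def conv1d(signal, kernel):
    """Valid-mode 1D correlation in pure Python."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def gated_conv1d(signal, feat_kernel, gate_kernel):
    """Gated convolution with a bypass: a sigmoid gate scales the feature
    response, and the input sample aligned with each output is added back.
    (Toy 1D stand-in for the 2D gated convolutions described for TRIPS,
    not the paper's actual layer.)"""
    feats = conv1d(signal, feat_kernel)
    gates = [1.0 / (1.0 + math.exp(-g)) for g in conv1d(signal, gate_kernel)]
    center = len(feat_kernel) // 2  # input sample aligned with each output
    return [signal[i + center] + g * f
            for i, (g, f) in enumerate(zip(gates, feats))]

x = [0.0, 1.0, 0.0, 2.0, 1.0]
y = gated_conv1d(x, [0.5, 0.5, 0.5], [1.0, 0.0, -1.0])  # 3 output samples
```

The appeal of gating for hole-filling is that the network can learn to shut the gate over empty pyramid pixels and let real splatted content through, rather than smearing the two together.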

By efficiently managing the rasterization process through the image pyramid, TRIPS significantly reduces the workload on the neural network. The network, therefore, can focus more on refining image quality.

The entire rendering pipeline of TRIPS is differentiable, allowing for automatic optimization of both point sizes and positions, a feature that enhances accuracy and quality. By optimizing input parameters, the system creates robust scene representations that are true to the source material. Despite the complexity of the process, TRIPS maintains a real-time frame rate of 60 frames per second.
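As a toy illustration of what differentiability buys (my own example, not the TRIPS pipeline): if a rendered point is modeled as a simple Gaussian blob, the loss against a target has analytic gradients in the point's position and size, so both can be refined by plain gradient descent:

```python
import math

def splat(u, pos, size):
    """1D Gaussian blob standing in for a rendered point."""
    return math.exp(-((u - pos) ** 2) / (2.0 * size ** 2))

def loss_and_grads(samples, target_pos, target_size, pos, size):
    """Squared error against a target blob, with analytic gradients in
    position and size -- the differentiability that lets a TRIPS-style
    pipeline optimize both automatically."""
    L = dpos = dsize = 0.0
    for u in samples:
        v = splat(u, pos, size)
        r = v - splat(u, target_pos, target_size)
        L += r * r
        dpos += 2.0 * r * v * (u - pos) / size ** 2
        dsize += 2.0 * r * v * (u - pos) ** 2 / size ** 3
    return L, dpos, dsize

samples = [i * 0.1 for i in range(-30, 31)]
pos, size = 1.0, 0.5                      # initial guess
initial = loss_and_grads(samples, 0.0, 1.0, pos, size)[0]
for _ in range(200):                      # plain gradient descent
    _, dp, ds = loss_and_grads(samples, 0.0, 1.0, pos, size)
    pos, size = pos - 0.01 * dp, size - 0.01 * ds
final = loss_and_grads(samples, 0.0, 1.0, pos, size)[0]
```

The real pipeline backpropagates through the trilinear splatting and the reconstruction network with the same principle, just in many more dimensions.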

TRIPS's performance extends to challenging scenarios, handling scenes with intricate geometry and expansive, large-scale environments efficiently.

These results are impressive, no doubt, but TRIPS does take a bit longer to train than 3DGS or Instant NGP: roughly 2-4 hours, with an average render time of 15ms per frame on a 4090. Still, that is impressive given the fidelity of the output.

Their GitHub is currently blank except for a README, though the license is listed as MIT (!). We will have to see whether that remains true as the repository is populated with code.

It took several months after 3D Gaussian Splatting was initially published for interest coming out of SIGGRAPH to be piqued. The TRIPS paper has only just been announced, and the full video and code have not yet been released, but on its face it represents an exciting new radiance field method. It goes without saying that we will be watching TRIPS carefully.
