Radiance Meshes for Volumetric Reconstruction


Michael Rubloff

Dec 4, 2025

Radiance Meshes

The jury is still very much out on the defining radiance field representation. For the past couple of years, Gaussian Splatting has surged in popularity because of its ease of use, fast rendering rates, and lifelike fidelity. A new paper, Radiance Meshes, is now making a compelling case that it deserves to be considered a major contender.

Produced by a collection of familiar authors whose credits range from NeRF to Gaussian Splatting and EVER, it proposes a different approach. Instead of representing a scene as Gaussians, voxels, or neural grids, treat it as a tetrahedral mesh whose cells each carry a tiny, linearly varying volume of color and density. The result is a radiance field that speaks the native language of GPUs, triangles, while retaining the view-dependent effects and continuity of volumetric rendering.

The authors frame the problem in familiar terms. Neural radiance fields remain the conceptual benchmark for quality, but they incur heavy computation and don't map cleanly onto existing graphics hardware. Gaussian splatting exploded in popularity precisely because it leaned into the GPU's strengths: simple primitives rendered very quickly. Yet even with those strengths, splatting leaves room for improvement.

Last year, my cohost MrNeRF and I spoke with Jon Barron about radiance fields on The View Dependent Podcast, and he highlighted a missing key quadrant for radiance field representations that Radiance Meshes seems to fill.

Radiance meshes attempt to resolve this tension by stepping sideways. Instead of point primitives or neural grids, the method begins with a sparse point cloud and performs a Delaunay tetrahedralization, partitioning space into a mesh of tetrahedra whose circumspheres contain no other points. Each of these little volumes becomes a cell of the radiance field: constant density, linearly varying color, and fully defined entry and exit points for exact integration of the volume-rendering equation.
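
To make the idea concrete, here is a minimal sketch (not the paper's code) of the two pieces this paragraph describes: building a tetrahedral mesh from a sparse point cloud with an off-the-shelf Delaunay tetrahedralization, and the closed-form volume-rendering integral for a single cell with constant density and color interpolating linearly between the ray's entry and exit points. The attribute layout (one sigma per cell, colors at the segment endpoints) is an illustrative assumption, not the paper's exact parameterization.

```python
import numpy as np
from scipy.spatial import Delaunay

# Sketch: partition a sparse 3D point cloud into tetrahedral cells.
rng = np.random.default_rng(0)
points = rng.random((200, 3))   # sparse point cloud in the unit cube
mesh = Delaunay(points)         # in 3D, Qhull's simplices are tetrahedra
tets = mesh.simplices           # (n_tets, 4) vertex indices, one row per cell

def integrate_cell(sigma, length, c_in, c_out):
    """Exact per-cell volume rendering: integral of
    sigma * exp(-sigma * t) * c(t) over t in [0, L], where c(t)
    blends linearly from c_in (ray entry) to c_out (ray exit).
    Returns (alpha, premultiplied color contribution)."""
    s = sigma * length
    alpha = 1.0 - np.exp(-s)
    w_out = alpha / s - (1.0 - alpha)  # closed-form weight on the exit color
    w_in = alpha - w_out               # remaining weight on the entry color
    return alpha, w_in * c_in + w_out * c_out
```

For small optical depth the two weights approach sigma * L / 2 each, i.e. the familiar midpoint shading, while for dense cells the entry color dominates, exactly as the continuous integral dictates. In the real renderer these quantities would be learned per cell and evaluated in a shader.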

Because these tetrahedra are true mesh primitives, they can be sorted front to back using a classic but underappreciated property of Delaunay geometry: the "power" of each circumsphere relative to the camera origin. This ordering supports exact visibility, even for fisheye cameras or heavily distorted lenses. Once sorted, the tetrahedra can be fed directly into the hardware triangle rasterizer, blended front to back, and integrated analytically inside the fragment shader. No splat approximations. No sorting errors. No temporal popping.

At comparable primitive counts, radiance meshes render faster than Gaussian Splatting, by roughly a third at HD resolutions, while eliminating the view dependent popping. The authors also introduce a mesh shader implementation that avoids redundant primitive loads and pushes frame rates further, bringing real time volumetric rendering of full scenes into reach on consumer GPUs. In parallel, a hardware accelerated ray tracer achieves a notable speed bump over Radiant Foam, despite offering exact visibility and significant improvements in numerical robustness.

Because radiance meshes are true semi-transparent triangle meshes, they can be dropped directly into existing graphics tooling. They deform naturally under physics engines like XPBD. They also support fisheye ray generation and non pin-hole cameras during training. And because each tetrahedron carries a compact, interpretable volumetric density, it becomes easy to extract watertight surface meshes without post hoc marching cubes or neural thresholding tricks.

Qualitatively, the model sits in a sweet spot. It doesn’t match the absolute peak PSNR of 3D Gaussian Splatting, but it maintains stable, consistent imagery across views and does so at speeds that outpace splatting while offering a path to real time volumetric rendering on the web, on desktops, and, crucially, inside the workflows that artists and engineers actually use.

I will keep saying it, but imaging is going to shift into lifelike 3D. We keep marching directly towards this facilitation and papers like this help accelerate the transition.

The code for Radiance Meshes is now available and additionally comes with a web viewer and demo scenes.