LinPrim: Linear Primitives for Differentiable Volumetric Rendering
Michael Rubloff
Jan 29, 2025
Radiance field representations have experienced explosive growth over the last five years. Now, researchers from the Technical University of Munich have introduced a compelling new radiance field approach called LinPrim, or Linear Primitives for Differentiable Volumetric Rendering. Developed by Nicolas von Lützow and Matthias Niessner, LinPrim leverages simple yet powerful geometric primitives, octahedra and tetrahedra, for differentiable volumetric rendering. The method promises high-fidelity reconstructions and a framework that is both intuitive and compatible with traditional 3D graphics workflows.
LinPrim takes a different approach from Gaussian Splatting by introducing explicit linear polyhedral primitives as the foundation for volumetric scene representation. These primitives offer bounded geometries that are easier to manipulate and more closely aligned with traditional graphics techniques. LinPrim’s use of bounded polyhedral primitives challenges the conventional reliance on continuous or Gaussian-based representations in novel view synthesis (NVS).
LinPrim uses two types of geometric primitives: octahedra (shapes with eight triangular faces) and tetrahedra (pyramids with four triangular faces). These primitives serve as the building blocks for reconstructing 3D scenes. Each primitive is defined by its position, rotation, vertex distances, opacity, and view-dependent color.
Octahedra leverage inherent symmetries to minimize the number of parameters required to describe their geometry. For example, the distances to opposing vertices are shared, ensuring stability and reducing computational complexity. Tetrahedra, while slightly more complex, also follow a simplified parameterization scheme.
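As a rough illustration, here is a minimal Python sketch of what such a per-primitive parameter set might look like for an octahedron, with one shared distance per axis. The field names and the quaternion-based rotation are assumptions made for clarity, not the paper's exact parameterization.

```python
from dataclasses import dataclass
import numpy as np

def quat_to_rotmat(q: np.ndarray) -> np.ndarray:
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Hypothetical per-primitive parameter set. An octahedron has 6 vertices along
# the +/- local axes; sharing one distance per axis (as the symmetry argument
# above suggests) leaves only 3 scale parameters.
@dataclass
class LinearPrimitive:
    position: np.ndarray          # (3,) world-space center
    rotation: np.ndarray          # (4,) unit quaternion for local orientation
    vertex_distances: np.ndarray  # (3,) one shared distance per axis
    opacity: float                # scalar density / alpha
    sh_coeffs: np.ndarray         # view-dependent color (e.g. spherical harmonics)

    def vertices(self) -> np.ndarray:
        """Return the 6 octahedron vertices in world space."""
        d = self.vertex_distances
        local = np.array([[ d[0], 0, 0], [-d[0], 0, 0],
                          [0,  d[1], 0], [0, -d[1], 0],
                          [0, 0,  d[2]], [0, 0, -d[2]]], dtype=np.float64)
        R = quat_to_rotmat(self.rotation)
        return local @ R.T + self.position
```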
The rendering process uses a GPU-optimized, fully differentiable rasterizer. This enables gradient-based optimization, allowing primitives to adjust their features (e.g., geometry, position, and color) to match input images.
Despite being highly expressive, LinPrim achieves interactive frame rates by relying on straightforward geometric calculations, such as ray-triangle intersections. The method also reduces memory consumption by using fewer primitives to represent the scene while maintaining visual fidelity comparable to Gaussian Splatting.
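Those ray-triangle tests are a well-understood building block. A standard way to compute one is the Möller–Trumbore algorithm, sketched below in plain NumPy; the paper's GPU rasterizer is of course far more optimized, so treat this purely as an illustration of the underlying test.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller–Trumbore ray-triangle intersection.

    Returns the ray parameter t (distance along `direction`) if the ray hits
    the triangle (v0, v1, v2), otherwise None.
    """
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det   # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = np.dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, qvec) * inv_det
    return t if t > eps else None
```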
Its workflow consists of three key stages: primitive initialization, differentiable rendering, and optimization. LinPrim begins by using Structure-from-Motion (SfM) to extract 3D points from the input images. Each SfM point seeds a primitive (octahedron or tetrahedron) with an initial position, size, and rotation. The primitives are then processed to determine how they interact with rays cast from a virtual camera: ray intersections are computed against the triangular faces of the primitives and used to derive opacity and color.
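Once a ray's entry and exit points through each primitive are known, the per-primitive contributions can be accumulated front to back. The sketch below uses the standard volumetric alpha model as a stand-in; the paper's exact integration of density inside each linear primitive may differ, and the variable names are assumptions.

```python
import numpy as np

def composite_ray(hits):
    """Front-to-back compositing of the primitives hit by one ray.

    `hits` is a list of (t_entry, t_exit, sigma, rgb) tuples, one per primitive
    the ray passes through (entry/exit come from ray-triangle intersections with
    the primitive's faces). Uses alpha = 1 - exp(-sigma * segment_length) as a
    simple volumetric model.
    """
    hits = sorted(hits, key=lambda h: h[0])       # sort by entry distance
    color = np.zeros(3)
    transmittance = 1.0
    for t_in, t_out, sigma, rgb in hits:
        alpha = 1.0 - np.exp(-sigma * max(t_out - t_in, 0.0))
        color += transmittance * alpha * np.asarray(rgb)
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:                  # early termination
            break
    return color, 1.0 - transmittance             # pixel color and accumulated opacity
```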
The process is fully differentiable, allowing errors in rendered images to propagate back to the primitives' geometric parameters. Using gradient descent, LinPrim adjusts the positions, shapes, and colors of its primitives to minimize differences between rendered and input images. Additionally, the system dynamically adds or removes primitives to ensure an optimal balance between scene complexity and fidelity. LinPrim also incorporates advanced anti-aliasing techniques to reduce visual artifacts.
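Conceptually, the optimization resembles any gradient-based fitting loop over a differentiable renderer. The hypothetical sketch below uses PyTorch with placeholder names (`render`, `params`, `views`) to show the structure; it is not the paper's implementation, and the densification/pruning step is only indicated in a comment.

```python
import torch

def optimize(params, render, views, iters=30_000, lr=1e-3):
    """Fit per-primitive parameters to posed training images.

    `params` is a list of tensors (positions, rotations, vertex distances,
    opacities, SH coefficients) with requires_grad=True; `render` is a
    differentiable renderer; `views` is a list of (camera, target_image) pairs.
    """
    optimizer = torch.optim.Adam(params, lr=lr)
    for step in range(iters):
        camera, target = views[step % len(views)]   # one training view per step
        pred = render(params, camera)               # differentiable rendering
        loss = torch.nn.functional.l1_loss(pred, target)
        optimizer.zero_grad()
        loss.backward()                             # gradients flow back to the primitives
        optimizer.step()
        # Periodically, primitives would also be added where error remains high
        # and removed where opacity is negligible (omitted here for brevity).
    return params
```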
In terms of rendering speed, LinPrim achieves real-time performance, hitting frame rates of up to 68 frames per second for octahedra-based scenes and 175 fps for tetrahedra-based scenes on an NVIDIA RTX 3090 GPU. The evaluations were run on the ScanNet++ and Mip-NeRF 360 datasets. While there’s no explicit mention of how much VRAM LinPrim consumes, the fact that it fits on a consumer card bodes well for the new radiance field method.
There’s no word yet on a potential code release or licensing information, but we will be closely monitoring the paper. We’re entering an exciting period where I believe we will see more novel radiance field methods. The original paper can be found here.