Slang.D: Gaussian Splatting Rasterizer

Michael Rubloff

Sep 26, 2024

It’s no secret that over the course of the last year, no paper has fascinated the world of Radiance Fields more than 3D Gaussian Splatting. In that span, we have also received a couple of additional papers from the 3DGS first author, George Kopanas, who now works at Google. His latest effort is a Slang.D implementation of the CUDA-accelerated rasterizer within Gaussian Splatting. But what does that mean for developers and researchers?

Slang.D is an open-source language designed to unify high-performance rendering pipelines. Traditionally, developers had to write separate code for each platform (CUDA, Vulkan, OptiX, and so on), which increases both development time and the chances for bugs. Slang.D solves this by offering a single codebase that can compile across these platforms, reducing complexity and making it easier to manage rendering pipelines.

One of the most compelling features of Slang.D is its ability to forgo the manual implementation of the backward pass, a crucial step in differentiable rendering tasks. Traditionally, when rendering is integrated with machine learning, the backward pass (which computes gradients) is a complex process that developers must either implement by hand or delegate to pre-existing tools. Slang.D changes this by automating the backward pass through its compiler, similar to how frameworks like PyTorch automatically handle gradient computations.
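
To make that analogy concrete, here is what an automatically derived backward pass looks like in PyTorch. This is a generic autodiff illustration with made-up values, not Slang.D code:

```python
import torch

# A toy differentiable "rendering" step: blend an opacity against a color
# and compare the resulting pixel to a target value.
opacity = torch.tensor(0.7, requires_grad=True)
color = torch.tensor(0.3, requires_grad=True)

pixel = opacity * color          # forward pass, written once
loss = (pixel - 1.0) ** 2        # loss against a target pixel value

loss.backward()                  # backward pass is derived automatically
print(opacity.grad, color.grad)  # gradients appear with no hand-written code
```

Slang.D applies the same principle at the shader level: you write the forward rendering code, and the compiler generates the gradient code for you.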

So, how is it possible for Slang.D to skip the manual backward pass without compromising efficiency? It is a combination of clever compiler optimizations and the fact that Slang.D lets you mix auto-differentiated functions with manually written gradients, which allows the programmer to apply algebraic tricks that make the gradient computation as efficient as hand-written code. Without these optimizations, the compiler would fall back to a naive method of back-propagation, leading to inefficiencies in memory and computation.
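
Slang.D expresses this mixing in its own syntax, but the same idea can be sketched in PyTorch, where a hand-written gradient (a custom autograd Function) composes freely with auto-differentiated operations. The example below is purely illustrative:

```python
import torch

class ManualSquare(torch.autograd.Function):
    """x^2 with a hand-written gradient (2 * x) instead of autodiff."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * 2 * x  # manually supplied gradient

x = torch.tensor(3.0, requires_grad=True)
# An auto-differentiated op (sin) composes with the manual gradient above.
y = torch.sin(ManualSquare.apply(x))
y.backward()
print(x.grad)  # cos(x^2) * 2x, assembled from auto and manual pieces
```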

By allowing developers to bypass the tedious work of manually coding backward passes while still ensuring optimal performance, Slang.D has the potential to significantly boost productivity for researchers and developers. At the same time, it maintains the flexibility and memory efficiency necessary for high-performance tasks.

In a discussion on the Slang.D GitHub repository, a question was raised about the possibility of using Slang.D with Metal, Apple’s GPU programming framework. The answer is both simple and nuanced. In theory, Slang.D supports multiple frameworks, and CUDA is just one of them. This means that if you already have a rendering framework that uses D3D or Metal, you could potentially re-use the same Slang shaders with some adjustments. However, in practice, when writing Slang.D code as part of a PyTorch program, CUDA is currently the only supported backend. So while re-using shaders in a Metal-based framework is possible, it requires some effort to compile Slang for these environments.
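
For a sense of what the PyTorch-plus-CUDA workflow looks like, here is a minimal sketch using the slangtorch interop package. The file name, kernel name, and tensor shapes are hypothetical, and the exact API details may differ from the shipping version:

```python
import torch
import slangtorch  # Slang's PyTorch interop; currently CUDA-only, per the article

# Hypothetical Slang file containing a kernel named "splat".
m = slangtorch.loadModule("splat.slang")

means = torch.rand(1024, 2, device="cuda")    # made-up input: 2D Gaussian centers
image = torch.zeros(256, 256, device="cuda")  # made-up output buffer

# Kernel arguments are passed by name; launchRaw sets the CUDA launch shape.
m.splat(means=means, output=image).launchRaw(
    blockSize=(16, 16, 1),
    gridSize=(16, 16, 1),
)
```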

Amazingly, George has not only made this work with the original INRIA implementation that he and the rest of the original 3DGS authors created, but has also extended its usefulness to the open-source community through Nerfstudio's gsplat.

Slang.D’s rasterizer already supports features from gsplat, but there are additional rendering features Kopanas hopes to target in future releases. While advanced techniques like Markov Chain Monte Carlo (MCMC) and Bilateral Guided Radiance Fields are already compatible with Slang, George is looking at more convenient rendering options, such as:

  • Depth Rendering: This would allow the rasterizer to render depth images efficiently.

  • Camera Batching: Enabling batch processing of camera inputs, which could significantly speed up training and inference processes for models that rely on multiple camera perspectives.

One of the most common concerns when adopting a new rendering framework is memory usage. The good news is that Slang.D's VRAM consumption is comparable to that of the original INRIA and gsplat implementations. The compute patterns are designed to be almost identical, ensuring that there are no surprises when switching to Slang.D. This was done deliberately to demonstrate the flexibility of the Slang language, showing that it can replicate the functionality of existing CUDA-based implementations without the complexity of manual backward pass coding.

You might think that matching VRAM usage comes at the cost of speed. Fortunately, that is not the case here: training times remain close to those of the original implementations, showing that Slang.D's performance is on par with its CUDA counterparts.

The potential for research acceleration with Slang.D is exciting, and it will be fascinating to see how its adoption evolves within the Radiance Field community. The code to get started can be found here.
