What Is Gaussian Splatting?
A definitive guide to the radiance field representation reshaping 3D capture.
3D Gaussian Splatting (3DGS) is a radiance field representation that reconstructs photorealistic 3D scenes from ordinary 2D photos. The simplest way to think about it: a Gaussian Splat is like a 3D photograph you can walk around inside, capturing not just what a place looks like from one angle but how it looks from every angle, with reflections and lighting that change as you move. Where Neural Radiance Fields (NeRFs) use a neural network to model a scene, Gaussian Splatting represents the scene explicitly — as millions of small 3D ellipsoids, each with position, size, color, and orientation. Both are radiance fields. Gaussian Splatting is the implementation that made real-time rendering of those fields practical, on hardware as small as a phone.
What Can You Do With Gaussian Splatting Today?
The fastest way to understand Gaussian Splatting is to see what people are already building with it. The technology has moved from research curiosity to production tool in under three years, and it now ships in places you've already seen — even if you didn't know the name.
You can capture a place with the phone in your pocket. Apps like Niantic's Scaniverse, Luma AI, KIRI Engine, and Gaussian SplatKing (built by RadianceFields' founder) produce shareable Gaussian Splats from a few minutes of walking around a space. The capture happens on-device; no cloud round-trip required for many workflows.
You're already using it inside Google Maps. Google's Immersive View uses Gaussian Splatting (and related radiance field techniques) to let you fly through cities that were previously flat satellite imagery. If you've used Immersive View on a coffee shop or landmark, you've used Gaussian Splatting without realizing it.
It's reshaping how mapping and surveying work. Esri, the largest GIS company in the world, has integrated Gaussian Splatting into its mapping pipeline. The same technology underpinning consumer apps is now being adopted by professional surveyors, architects, and engineers who need photorealistic 3D representations of real places.
It's a real production tool for media and entertainment, sports, robotics, and more. From volumetric capture studios to autonomous vehicle simulation to NFL replays, Gaussian Splatting is now embedded in production pipelines across industries. The applications keep widening as the tooling matures.
You can view and share splats in VR. Apple Vision Pro, Meta Quest, and Pico headsets all support Gaussian Splatting playback through various apps. The viewing experience approaches "photographic memory of a place" — and unlike traditional 360° video, you can actually move around inside the scene.
Quick reference: tools by job. If you want to start exploring Gaussian Splatting hands-on, here's where to look:
- Capture from a phone or camera: Scaniverse, Luma AI, KIRI Engine, Gaussian SplatKing
- Train on your own computer: Postshot, LichtFeld Studio, Brush, Nerfstudio
- Edit and clean up splats: SuperSplat, Gauzilla Pro
- Host, share, and embed: Blurry, Splat Labs, Reflct
- Use inside game engines or web frameworks: Unreal Engine, Unity, Three.js, React Three Fiber, Babylon.js, Spline
For the full list of every meaningful platform working in Gaussian Splatting, see the Radiance Field Ecosystem directory.
First, What Is a Radiance Field?
A radiance field is a way to represent a 3D scene that captures not just shape and color, but how light behaves at every point — including how appearance changes with viewing angle. Reflections on a glass tabletop, highlights on a car's hood, the way a leaf looks slightly different when you tilt your head: traditional 3D models struggle with these effects, but they're native to a radiance field.
The result is a scene you can move through that looks photographic from any angle, because the underlying representation models the same view-dependent effects your eyes see in the real world.
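In the notation of the original NeRF paper, this idea compresses into a single function: a radiance field maps a 3D position and a viewing direction to a color and a volume density.

```latex
F_{\Theta} : (x, y, z, \theta, \phi) \longmapsto (r, g, b, \sigma)
```

Here (x, y, z) is a point in the scene, (θ, φ) is the direction it's viewed from, (r, g, b) is the emitted color, and σ is the density, roughly how opaque that point is. Every radiance field method is a different way of storing and evaluating this function.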
Radiance fields are a category, not a specific technique. Several methods reconstruct them, each with different tradeoffs:
- Neural Radiance Fields (NeRFs) — introduced in 2020 by Mildenhall, Srinivasan, Tancik, Barron, Ramamoorthi, and Ng at UC Berkeley. Uses a neural network to model the scene implicitly. The original radiance field method.
- 3D Gaussian Splatting (3DGS) — introduced in 2023 by Kerbl, Kopanas, Leimkühler, and Drettakis. Represents the scene explicitly with 3D Gaussians. Won Best Paper at SIGGRAPH 2023.
- Other approaches — methods like Plenoxels, SVRaster, Radiant Foam, Trilinear Point Splatting, and 3D Gaussian Ray Tracing take different paths to the same goal, prioritizing different tradeoffs around speed, quality, and rendering technique.
The radiance field family isn't fixed. New representations continue to emerge, each making different bets about how to encode a scene and how to render it. NeRFs and Gaussian Splatting are the two most widely adopted today, but they're unlikely to be the last.
Both NeRFs and Gaussian Splatting begin the same way: with a set of 2D photos taken from different angles, processed through a Structure from Motion (SfM) algorithm that estimates camera positions and produces a sparse 3D point cloud. From there, the methods diverge — but they converge on the same goal, which is reconstructing a radiance field that can be viewed from any new angle.
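If you want to see what that shared first step looks like in practice, here's a minimal sketch using COLMAP's official pycolmap bindings. Paths are placeholders, and the function names follow recent pycolmap releases; check the pycolmap docs for your version.

```python
# Minimal Structure-from-Motion run with pycolmap (COLMAP's Python
# bindings). Input: a folder of photos. Output: estimated camera poses
# plus a sparse 3D point cloud, the shared starting point for both
# NeRF and Gaussian Splatting training.
import pycolmap

image_dir = "photos/"        # your capture
database_path = "colmap.db"  # COLMAP's feature/match database
output_path = "sparse/"      # reconstruction output directory

pycolmap.extract_features(database_path, image_dir)  # detect image features
pycolmap.match_exhaustive(database_path)             # match features across photos
maps = pycolmap.incremental_mapping(database_path, image_dir, output_path)
print(maps[0].summary())     # camera poses + sparse point count
```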
The full title of the original Gaussian Splatting paper is 3D Gaussian Splatting for Real-Time Radiance Field Rendering. Gaussian Splatting isn't an alternative to radiance fields. It's a method for rendering them in real time.
How Gaussian Splatting Works
The core idea is simpler than it sounds: instead of describing a 3D scene with triangles and textures (like a traditional 3D model) or with a neural network you have to query (like a NeRF), Gaussian Splatting describes the scene with millions of small, soft 3D ellipsoids called Gaussians.
Each Gaussian carries a few properties — where it sits in space, how big it is, which way it's oriented, what color it is, and how transparent it is — and the scene as a whole is just the sum of all those Gaussians overlapping in 3D.
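In the paper's formulation, that handful of properties defines an actual Gaussian function in 3D space. Each primitive is centered at a position μ, with a covariance matrix Σ factored into a rotation R (stored as a quaternion) and a diagonal scale matrix S, a factorization that keeps Σ valid throughout optimization:

```latex
G(x) = e^{-\frac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)},
\qquad
\Sigma = R\,S\,S^{\top}R^{\top}
```

Color is stored as spherical harmonics coefficients so it can change with viewing direction, and opacity is a single scalar per Gaussian.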
To render a new view of the scene, the renderer projects every Gaussian onto the 2D plane of the camera, sorts them, and blends them together based on their depth and properties. Because this is fundamentally a rasterization process — the same kind of operation a GPU performs billions of times per second to render video games — Gaussian Splatting can render in real time. That's the unlock.
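The blending step is standard front-to-back alpha compositing, the same "over" operator GPUs have used for decades. A minimal sketch for a single pixel, with made-up example values:

```python
import numpy as np

# Contributions of three Gaussians overlapping one pixel, sorted
# nearest-to-farthest. Colors and alphas here are hypothetical.
colors = np.array([[0.9, 0.2, 0.2],   # nearest splat
                   [0.2, 0.9, 0.2],
                   [0.2, 0.2, 0.9]])  # farthest splat
alphas = np.array([0.4, 0.6, 0.8])

# Front-to-back "over" compositing:
#   C = sum_i  c_i * a_i * prod_{j<i} (1 - a_j)
pixel = np.zeros(3)
transmittance = 1.0  # how much light still reaches this splat
for c, a in zip(colors, alphas):
    pixel += transmittance * a * c
    transmittance *= 1.0 - a

print(pixel)  # final blended color for this pixel
```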
The training pipeline:
- Capture. A user takes a series of photos or a video of a scene from multiple angles. The same input that works for traditional photogrammetry works here.
- Structure from Motion (SfM). Software like COLMAP or RealityCapture analyzes the photos, figures out where each one was taken from, and produces a sparse 3D point cloud as a starting reference.
- Gaussian initialization. The sparse points become the seed positions for the initial Gaussians.
- Optimization. The Gaussians are progressively refined — repositioned, resized, recolored, split, or removed — by comparing renders of the current Gaussian cloud against the original photos. The training signal is simple: "make the rendered view look more like this real photo."
- Output. A .ply or .splat file containing millions of optimized 3D Gaussians. This is the splat.
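To make the optimization step concrete, here's a toy, self-contained sketch of the same idea in 2D: fit a set of Gaussians to a single target image by gradient descent. The real pipeline optimizes rotated 3D Gaussians against many photos with a custom CUDA rasterizer, adds a D-SSIM term to the loss, and periodically splits, clones, and prunes Gaussians; none of that is shown here.

```python
# Toy 2D version of the 3DGS optimization loop: fit N axis-aligned
# Gaussians to one target image with gradient descent. Illustrative only.
import torch

H, W, N = 64, 64, 200
target = torch.rand(H, W, 3)  # stand-in for a real photo

# Learnable per-Gaussian parameters
pos = torch.nn.Parameter(torch.rand(N, 2) * torch.tensor([W, H]).float())
log_scale = torch.nn.Parameter(torch.full((N, 2), 1.0))  # log std-dev in pixels
color = torch.nn.Parameter(torch.rand(N, 3))
logit_alpha = torch.nn.Parameter(torch.zeros(N))         # opacity before sigmoid

ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
pix = torch.stack([xs, ys], dim=-1).float()              # (H, W, 2) pixel coords

def render():
    # Evaluate every Gaussian at every pixel and accumulate weighted
    # colors (no depth sorting here, another toy simplification).
    d = pix[None] - pos[:, None, None]                   # (N, H, W, 2)
    g = torch.exp(-0.5 * (d / log_scale.exp()[:, None, None]) ** 2).prod(-1)
    w = torch.sigmoid(logit_alpha)[:, None, None] * g    # (N, H, W)
    return (w[..., None] * color[:, None, None]).sum(0).clamp(0, 1)

opt = torch.optim.Adam([pos, log_scale, color, logit_alpha], lr=0.05)
for step in range(500):
    loss = (render() - target).abs().mean()  # "look more like the real photo"
    opt.zero_grad()
    loss.backward()
    opt.step()
```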
The whole training process typically takes minutes to tens of minutes on a consumer GPU, depending on scene complexity and target quality — fast enough that iterating on a capture is realistic.
How Gaussian Splatting Relates to NeRFs (And Why That Framing Matters)
The most common question about Gaussian Splatting is whether it has "replaced" NeRFs. The framing is wrong.
Gaussian Splatting and NeRFs are both radiance field representations. They encode the same kinds of information — color, density, view-dependent appearance — but they differ in how that information is stored.
- NeRFs use an implicit representation. The scene lives inside a neural network. To find out what a point in space looks like, you query the network with a 3D coordinate and a viewing direction, and it returns a color and density. The scene is encoded in the network's weights.
- Gaussian Splatting uses an explicit representation. The scene is a list of 3D Gaussians with concrete properties. To find out what a point looks like, you check which Gaussians overlap that point and combine them.
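A toy contrast in code, with made-up shapes and names on both sides (neither is a real library API):

```python
import torch

# Implicit (NeRF-style): the scene lives inside network weights.
# To learn anything about a point, you must run the network.
nerf = torch.nn.Sequential(
    torch.nn.Linear(5, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4)
)
query = torch.tensor([[0.1, 0.2, 0.3, 0.0, 1.0]])  # xyz + viewing direction
rgb_sigma = nerf(query)                            # color and density come out

# Explicit (3DGS-style): the scene is a table of primitives that can be
# indexed, edited, filtered, and saved directly.
splats = {
    "pos":     torch.rand(100_000, 3),
    "scale":   torch.rand(100_000, 3),
    "color":   torch.rand(100_000, 3),
    "opacity": torch.rand(100_000),
}
splats["color"][0] = torch.tensor([1.0, 0.0, 0.0])  # recolor one Gaussian
keep = splats["opacity"] > 0.1                      # prune nearly invisible ones
splats = {k: v[keep] for k, v in splats.items()}
```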
That single difference cascades into everything else:
| Property | NeRFs | Gaussian Splatting |
|---|---|---|
| Representation | Implicit (neural network) | Explicit (3D primitives) |
| Training time | Fast (minutes, with implementations like Instant-NGP) | Fast (minutes on a consumer GPU) |
| Rendering speed | Real-time (with optimized implementations) | Real-time, including on mobile |
| File format | Network weights | .ply or .splat, list of primitives |
| Editability | Difficult — scene is baked into weights | Direct — primitives can be moved, deleted, recolored |
| Mesh extraction | Possible (NeuS, Neuralangelo) | Possible (2DGS, RaDe-GS, Gaussian Frosting, Texture-GS) |
NVIDIA's Instant-NGP is the standard NeRF implementation today, offering fast training and real-time rendering. Each approach has strengths the other doesn't — NeRFs handle certain hard cases like view-dependent reflections on complex materials with elegance that explicit representations have to work harder to match. Gaussian Splatting wins decisively on editability, ease of integration into existing 3D pipelines, and out-of-the-box mobile performance.
The most interesting recent work is hybrid. Google's RadSplat combines NeRF quality with 3DGS rendering speed, hitting roughly 900 frames per second while preserving NeRF-level fidelity. NVIDIA's 3D Gaussian Ray Tracing keeps the explicit Gaussian representation but renders it with ray tracing instead of rasterization, enabling effects (refractions, mirrors, fisheye lenses) that pure rasterization-based 3DGS can't easily produce.
The takeaway: Gaussian Splatting didn't kill NeRFs. The two methods are converging, hybridizing, and pushing each other forward. And as SVRaster, Radiant Foam, and other new representations show, the radiance field family will keep growing.
How Gaussian Splatting Differs from Photogrammetry
If you've worked with 3D capture before, you've probably used photogrammetry — the process of taking many photos of an object or scene and reconstructing a textured 3D mesh from them. Photogrammetry is the established workflow in fields like surveying, archaeology, AEC, and visual effects, and it's been refined for decades.
Gaussian Splatting and photogrammetry start from the same input — multiple photos of the same scene — but they produce fundamentally different outputs.
- Photogrammetry produces a mesh and texture. The output is geometry: triangles, UV-mapped textures, and material maps. It's a model of the surface of the world. You can drop it into any 3D pipeline that accepts meshes.
- Gaussian Splatting produces a radiance field. The output is a volumetric representation of how the scene looks, including view-dependent effects like reflections and highlights that meshes can only approximate.
That difference matters because the two formats are good at different things.
Photogrammetry wins when you need accurate geometry — measurements, intersections, simulation, integration with CAD or BIM workflows. A mesh is a precise description of where surfaces are.
Gaussian Splatting wins when you need photorealistic visuals — capturing how a place actually looks, with all the subtle lighting and view-dependent detail a mesh would lose. It also tends to require fewer input photos to produce a usable result, and the result is typically ready to view in real time without further processing.
For many professional workflows, the answer isn't either-or. Hybrid pipelines that combine LiDAR or photogrammetry for measurement-grade geometry with Gaussian Splatting for visual realism are increasingly common. Mesh extraction techniques like 2DGS, RaDe-GS, Gaussian Frosting, and Texture-GS continue to narrow the gap between the two formats.
The 2023 SIGGRAPH Paper That Changed 3D Capture
The 2023 paper 3D Gaussian Splatting for Real-Time Radiance Field Rendering by Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis won Best Paper at SIGGRAPH 2023. That recognition reflected something the field had been chasing for years: a radiance field method that was both high-quality and fast enough to be practical.
Two breakthroughs made it work.
Fast training. A scene that previously took hours to reconstruct could now train in minutes on a consumer GPU. That changed who could use the technology — from research labs to hobbyists with gaming PCs — and it changed the iteration loop. Mistakes in capture became cheap to fix.
Real-time rendering. Once a Gaussian Splat is trained, it renders fast enough to view in real time, including on mobile devices. That unlocked an entire category of applications — VR experiences, web embeds, mobile capture-and-view loops, real-time editing — that simply weren't viable with earlier radiance field methods.
The combination is what shifted the technology from "promising research" to "production tool" in under three years. Within months of the paper's release, Gaussian Splatting was shipping in apps from Luma AI, Polycam, and KIRI Engine. Within a year, it was integrated into Unreal Engine, Unity, Three.js, React Three Fiber, Spline, Babylon.js, and Blender. Today, every major company offering radiance field tools supports Gaussian Splatting alongside (or instead of) NeRFs.
The deeper reason it matters: Gaussian Splatting is a serious candidate for the next dominant medium of visual documentation. Photography captures one viewpoint. Video captures a sequence of viewpoints. A radiance field captures every viewpoint, with lighting that responds to the viewer the way the real world does. As capture and viewing tools mature, the question isn't whether this becomes a routine way to document places, products, and events — it's when.
Common Misconceptions
Three claims about Gaussian Splatting come up regularly. All three are wrong.
"Gaussian Splatting isn't a radiance field."
It is. The full title of the original paper is 3D Gaussian Splatting for Real-Time Radiance Field Rendering. Gaussian Splatting encodes the same properties a NeRF encodes — color, density, view-dependent appearance — using explicit primitives instead of a neural network. The category is "radiance field"; Gaussian Splatting is one implementation within that category.
"Gaussian Splatting replaced NeRFs."
It didn't. NeRFs and Gaussian Splatting are both active areas of research, and the most interesting recent work combines them. Google's RadSplat is a NeRF/3DGS hybrid that hits roughly 900 frames per second while preserving NeRF-level fidelity. NVIDIA's 3D Gaussian Ray Tracing keeps the Gaussian representation but adds ray tracing for effects like refractions and mirrors. The two methods are converging, not competing.
"You can't get a mesh from a Gaussian Splat."
You can. Methods like 2DGS, RaDe-GS, Gaussian Frosting, and Texture-GS extract meshes from Gaussian Splats with growing reliability. The output may not match the precision of a high-quality photogrammetry mesh — yet — but mesh extraction from radiance fields is a fast-moving research area, and the gap is closing.
How to Get Started With Gaussian Splatting
The shortest path from curiosity to a working Gaussian Splat is three steps: capture, train (or skip training, depending on the tool), and view. Which tools you use depends on what you have and what you want to do.
Step 1: Capture.
The minimum requirement is a series of photos or a video covering the scene from multiple angles. The same capture technique that works for photogrammetry — overlapping shots, varied heights, full coverage of the subject — works for Gaussian Splatting. For a small object, 50–100 photos from a circle around the subject is plenty. For a room or outdoor space, plan for full coverage at multiple heights.
If you want the fastest possible path, mobile capture apps handle steps 1 and 2 together. Niantic's Scaniverse processes the splat on-device for free, as does Gaussian SplatKing; Luma AI and KIRI Engine offer cloud-based pipelines with desktop and mobile inputs.
Step 2: Train (if you're not using an all-in-one app).
If you're capturing with a dedicated camera or working with footage outside a mobile app, you'll train the splat on a desktop. Postshot, LichtFeld Studio, Brush, and Nerfstudio all run locally on consumer GPUs. Training time depends on your hardware, scene complexity, and target quality, but a typical capture trains in minutes to tens of minutes.
The single most important hardware decision is your GPU. We've published a practical buying guide for GPUs in radiance field work that covers the tradeoffs by budget.
Step 3: View, edit, share.
Once you have a .ply or .splat file, you can view it in a browser viewer, edit it, embed it in a site, or load it into a game engine. SuperSplat and Gauzilla Pro handle viewing and editing. Blurry, Splat Labs, and Reflct handle hosting, sharing, and embedding. Splats also load directly into Unreal Engine, Unity, Three.js, React Three Fiber, Babylon.js, Spline, and Blender via plugins.
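If you're curious what's actually inside one of those .ply files, you can open it with the plyfile library. The property names below follow the original Inria implementation's export layout; other exporters may differ.

```python
# Peek inside a Gaussian Splat .ply file. Property names assume the
# original Inria training code's layout.
from plyfile import PlyData  # pip install plyfile

ply = PlyData.read("scene.ply")  # hypothetical path
gaussians = ply["vertex"]

print(len(gaussians["x"]))       # number of Gaussians in the scene
print(gaussians["x"][0], gaussians["y"][0], gaussians["z"][0])  # one position
print(gaussians["opacity"][0])   # opacity (stored pre-sigmoid)
print(gaussians["scale_0"][0])   # one of three log-scale values
print(gaussians["f_dc_0"][0])    # base (0th-order SH) color component
```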
For VR viewing, MetalSplatter and Spatial Fields bring splats to Apple Vision Pro; Meta Quest and Pico headsets have their own playback options.
For the full directory of every tool working in Gaussian Splatting, see the Radiance Field Ecosystem.
Where to Follow What's Next
The pace of research and tooling around Gaussian Splatting is fast. New papers ship weekly, established tools update monthly, and entirely new platforms launch quarterly. RadianceFields.com tracks the space across news coverage, the platform directory, and the View Dependent podcast.
For the latest on what's shipping right now, see the most recent coverage on the homepage. For company and tool details, see the platforms directory.
Frequently Asked Questions
Is Gaussian Splatting a radiance field?
Yes. Gaussian Splatting is a radiance field representation. The full title of the original paper is 3D Gaussian Splatting for Real-Time Radiance Field Rendering. It encodes the same properties as a Neural Radiance Field — color, density, and view-dependent appearance — using explicit 3D primitives instead of a neural network.
How is Gaussian Splatting different from NeRFs?
Both are radiance fields. The difference is how the scene is stored. NeRFs use an implicit representation — the scene lives inside a neural network, queried per point. Gaussian Splatting uses an explicit representation — the scene is a list of millions of 3D Gaussians with concrete properties. That single difference cascades into faster rendering, easier editing, and broader pipeline integration for Gaussian Splatting.
Do I need a GPU for Gaussian Splatting?
For training, yes — a modern consumer GPU is the practical floor. For viewing, increasingly no: mobile devices, tablets, and even modest laptops can render Gaussian Splats in real time through optimized viewers. For a deeper breakdown, see our GPU buying guide.
Can I get a mesh from a Gaussian Splat?
Yes. Methods including 2DGS, RaDe-GS, Gaussian Frosting, and Texture-GS extract meshes from Gaussian Splats. The mesh quality may not match high-end photogrammetry yet, but the gap is closing fast.
Is Gaussian Splatting open source?
The original Gaussian Splatting code from Inria and the MPI is open source. So are several training implementations including Brush, LichtFeld Studio, and Nerfstudio's Splatfacto. Many commercial platforms layer proprietary features on top of this open foundation.
Can I use Gaussian Splatting in Unreal, Unity, Blender, or on the web?
Yes to all four, with growing support. Unreal Engine has multiple plugin options including those from Volinga, Luma, and others. Unity supports splats through plugins. Blender has several community add-on integrations. On the web, Three.js, React Three Fiber, Babylon.js, Spline, and PlayCanvas all render Gaussian Splats natively or through libraries.
How big are Gaussian Splat files?
Typical splats range from tens of megabytes to a few hundred megabytes, depending on scene size and quality. Compression and streaming techniques are an active area of work — Cesium, Khronos, and others are shipping infrastructure for level-of-detail and progressive loading that will reduce these sizes meaningfully over time.
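As a back-of-envelope check on those numbers, assuming the original Inria .ply layout (about 62 float32 properties per Gaussian: position, normals, 48 spherical-harmonics color coefficients, opacity, scale, and rotation):

```python
# Rough uncompressed size of a splat .ply, under the assumptions above.
n_gaussians = 1_000_000
floats_per_gaussian = 62      # Inria layout with 3rd-degree SH color
size_mb = n_gaussians * floats_per_gaussian * 4 / 1e6
print(f"{size_mb:.0f} MB")    # ~248 MB for a million Gaussians
```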