What Is a Radiance Field?
The category of 3D scene representation reshaping how we capture and view the world.
A radiance field is a 3D representation of a scene that captures not just shape and color, but how light behaves at every point — including reflections, highlights, and subtle shading shifts that change with viewing angle. Where a traditional 3D model stores surfaces and textures, a radiance field stores the full appearance of a scene from every possible viewpoint. The result is a virtual scene that looks photographic from any direction. Radiance fields are a category, not a single technique. The two most widely used methods are Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), with several other implementations — Plenoxels, SVRaster, Triangle Splatting, Radiant Foam, and 3D Gaussian Ray Tracing — taking different paths to the same goal.
The Short Version
If you've ever looked at a 3D model in a video game and noticed it doesn't quite look real — flat lighting, missing reflections, materials that don't change as you move — you've felt the limit of how 3D has worked for decades. A radiance field solves that. The simplest way to think about it: imagine a 3D photograph you can walk around inside. The reflections on a glass tabletop shift as you move past it. The highlight on a car's hood travels with your eye. The leaves of a plant catch light differently from each angle. The scene doesn't just have shape — it has appearance, the way reality does.
Radiance fields are produced by training a computer to reconstruct that appearance from a set of ordinary 2D photos. You give the system 50 photos of a place from different angles. The system figures out the geometry, the lighting, and how the appearance changes from any new angle you might want to view from. The output is a file you can open in a viewer and explore as if you were there.
That's the entire idea. The technical depth comes later — but if you remember just one thing, remember that a radiance field is what you get when 3D representation finally catches up to how photographs actually look.
How Radiance Fields Work
Every radiance field, regardless of which method created it, encodes the same core information: for any point in 3D space, looked at from any direction, what color and what density does the scene have at that point?
This is sometimes called a 5D function, because it takes five inputs — three for the spatial location (X, Y, Z) and two for the viewing direction. The output is a color value and a density value. Render enough of those points along the rays of a virtual camera, and you get an image that captures the scene from a new viewpoint that wasn't in the original photographs.
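To make that 5D function concrete, here is a minimal Python sketch of how a renderer uses it. The `radiance_field` function below is a placeholder standing in for whatever a given method actually stores (a neural network, Gaussians, voxels), and the loop is the standard volume-rendering recipe: sample points along a camera ray, query the field at each one, and blend front to back. It is an illustration of the idea, not any particular implementation.

```python
import numpy as np

def radiance_field(xyz, view_dir):
    """Stand-in for a trained radiance field: given a 3D point and a viewing
    direction, return an RGB color and a density. A NeRF would answer this by
    querying a neural network; Gaussian Splatting by evaluating nearby splats."""
    rgb = np.zeros(3)   # placeholder color
    density = 0.0       # placeholder density
    return rgb, density

def render_ray(origin, direction, near=0.1, far=5.0, n_samples=64):
    """Classic volume rendering: sample along the ray and alpha-composite."""
    ts = np.linspace(near, far, n_samples)
    dt = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0                        # light still reaching the camera
    for t in ts:
        point = origin + t * direction
        rgb, density = radiance_field(point, direction)
        alpha = 1.0 - np.exp(-density * dt)    # opacity of this small segment
        color += transmittance * alpha * rgb
        transmittance *= (1.0 - alpha)
    return color                               # one pixel of the rendered image
```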
The practical pipeline for building one looks roughly the same across methods:
- Capture. A user takes a series of photos or a video of a scene from multiple angles. The same input that works for traditional photogrammetry works for radiance fields.
- Structure from Motion (SfM). Software like COLMAP or RealityCapture analyzes the photos and figures out where each one was taken from, producing a sparse 3D point cloud as a starting reference.
- Optimization. The chosen radiance field method (NeRF, Gaussian Splatting, etc.) iteratively refines its representation by comparing rendered views against the original photos. The training signal is simple: "make the rendered view look more like this real photo." Over thousands of iterations, the representation converges. A minimal sketch of this loop follows the list.
- Output. A file containing the trained representation. For a NeRF, this is the weights of a neural network. For Gaussian Splatting, it's a list of 3D primitives. For other methods, it's something else again — but functionally, it's the radiance field, ready to be rendered from any new viewpoint.
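Stripped of every real detail, the optimization step looks like the toy loop below. Here `params` and `render` are placeholders for the actual scene representation and differentiable renderer of whichever method you use, and the photos and poses are fake stand-ins, but the training signal (render, compare against the real photo, nudge the parameters) is the one described above.

```python
import torch

# Toy stand-ins: in a real pipeline `params` would be network weights (NeRF)
# or millions of Gaussian primitives (3DGS), and `render` would be a
# differentiable volume renderer or rasterizer.
params = torch.zeros(3, 64, 64, requires_grad=True)  # a "scene" of learnable values

def render(scene_params, camera_pose):
    # Placeholder differentiable renderer: ignores the pose entirely.
    return scene_params

photos = [torch.rand(3, 64, 64) for _ in range(5)]   # captured images (fake data here)
poses = list(range(5))                                # camera poses from SfM (placeholder)

optimizer = torch.optim.Adam([params], lr=1e-2)
for step in range(1000):
    i = step % len(photos)
    rendered = render(params, poses[i])
    loss = torch.mean((rendered - photos[i]) ** 2)    # "look more like this real photo"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```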
The differences between methods come down to how they store and render the radiance field — not what a radiance field fundamentally is. A NeRF stores the field implicitly inside a neural network. Gaussian Splatting stores it explicitly as millions of 3D ellipsoids. Each approach has consequences for speed, file size, editability, and rendering hardware — but the underlying object being represented is the same.
The Methods That Build Radiance Fields
Several distinct methods reconstruct radiance fields. They differ in how they store the scene and how they render new views, but they all produce the same underlying object — a 3D representation that captures view-dependent appearance.
Neural Radiance Fields (NeRFs) were introduced in 2020 by Mildenhall, Srinivasan, Tancik, Barron, Ramamoorthi, and Ng at UC Berkeley. NeRFs use a neural network to model the scene implicitly. To find out what a point in space looks like from a particular angle, you query the network with a 5D coordinate, and it returns a color and density. The scene is encoded entirely in the network's weights. NeRFs are the original radiance field method and remain a core area of research, especially for hybrid approaches.
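As an illustration of "the scene is encoded entirely in the network's weights," here is a toy version of such a network in Python (using PyTorch). A real NeRF adds positional encoding of the inputs, a much deeper network, and separate density and color branches, so treat this as a sketch of the idea rather than the published architecture.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Illustrative only: real NeRFs use positional encoding, deeper MLPs,
    and separate branches for density and view-dependent color."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),               # 3 color channels + 1 density
        )

    def forward(self, xyz, view_dir):
        x = torch.cat([xyz, view_dir], dim=-1)  # the 5D coordinate
        out = self.mlp(x)
        rgb = torch.sigmoid(out[..., :3])       # colors in [0, 1]
        density = torch.relu(out[..., 3:])      # non-negative density
        return rgb, density

# Query: what does the point (0.1, 0.2, 0.3) look like from direction (theta, phi)?
rgb, density = TinyNeRF()(torch.tensor([[0.1, 0.2, 0.3]]), torch.tensor([[0.5, 1.0]]))
```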
3D Gaussian Splatting (3DGS) was introduced in 2023 by Kerbl, Kopanas, Leimkühler, and Drettakis. It won Best Paper at SIGGRAPH 2023. Where NeRFs use a neural network, Gaussian Splatting represents the scene explicitly — as millions of small 3D ellipsoids called Gaussians, each with position, size, color, and orientation. This explicit representation is the unlock that made real-time rendering of radiance fields practical, including on mobile devices. Gaussian Splatting is the implementation that drove radiance fields from research curiosity to production tool. To go deeper, see What Is Gaussian Splatting?
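The contrast with NeRF's implicit encoding is easiest to see in data terms: a Gaussian Splatting scene is essentially a long array of records like the one sketched below. The field names are illustrative (actual file layouts vary between implementations), but each trained splat carries roughly this information.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian:
    """One splat out of the millions that make up a trained 3DGS scene.
    Field names are illustrative; actual .ply layouts vary by implementation."""
    position: np.ndarray   # (3,) center in world space
    scale: np.ndarray      # (3,) extent of the ellipsoid along each axis
    rotation: np.ndarray   # (4,) quaternion giving the ellipsoid's orientation
    opacity: float         # how much this splat occludes what's behind it
    sh_coeffs: np.ndarray  # spherical-harmonic coefficients: color as a function of view direction

# A scene is simply a large list of these, sorted, rasterized ("splatted")
# onto the screen, and alpha-blended.
```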
Instant-NGP is NVIDIA's research project, published in 2022, that introduced multi-resolution hash encoding, dramatically accelerating NeRF training and rendering. Today, Instant-NGP is the standard NeRF implementation, offering fast training and real-time rendering. It's not a separate method so much as an optimized realization of the NeRF approach.
3D Gaussian Ray Tracing was unveiled by NVIDIA Research in July 2024. It keeps the explicit Gaussian representation of 3DGS but renders it with ray tracing instead of rasterization. This unlocks effects that pure rasterization-based 3DGS struggles with — refractions, mirrors, depth of field, and the use of fisheye lenses during capture.
Voxel-based methods (Plenoxels and SVRaster) take a third path, representing the scene as a sparse 3D grid of voxels rather than as a neural network or as primitives. Plenoxels, introduced by Fridovich-Keil, Yu, and colleagues at UC Berkeley in 2022, represents the scene as a sparse voxel grid with spherical harmonics, optimized through gradient descent without any neural networks. It demonstrated that NeRF-quality results could be achieved roughly two orders of magnitude faster than vanilla NeRF training. SVRaster (Sun, Choe, Loop, Ma, and Wang at NVIDIA, CVPR 2025) extends the voxel approach with adaptive multi-level voxels and a custom rasterizer using direction-dependent Morton ordering, which avoids the popping artifacts sometimes seen in Gaussian Splatting and integrates naturally with classical mesh-extraction techniques like Marching Cubes and TSDF-Fusion. Both methods make the case that view-dependent radiance field rendering doesn't require neural networks or primitive-based representations to be competitive.
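The spherical harmonics mentioned above are how both Plenoxels and Gaussian Splatting store view-dependent color without a neural network: each point keeps a handful of coefficients, and the color you see is the harmonic expansion evaluated at your viewing direction. A minimal degree-1 evaluation might look like the sketch below; the constants are the standard real SH basis values, while the +0.5 offset and clamping follow common implementation conventions rather than anything fundamental.

```python
import numpy as np

# Real spherical-harmonic basis constants for degrees 0 and 1.
C0 = 0.28209479177387814
C1 = 0.4886025119029199

def sh_to_rgb(sh, view_dir):
    """Evaluate a degree-1 spherical-harmonic color for a viewing direction.
    `sh` has shape (4, 3): one RGB coefficient per basis function. Higher-degree
    expansions (Plenoxels and 3DGS typically go to degree 2 or 3) add more terms."""
    x, y, z = view_dir / np.linalg.norm(view_dir)
    rgb = (C0 * sh[0]
           - C1 * y * sh[1]
           + C1 * z * sh[2]
           - C1 * x * sh[3])
    return np.clip(rgb + 0.5, 0.0, 1.0)  # offset + clamp, as in common 3DGS code

# The same stored point can return a slightly different color from every
# direction you look at it: the "view-dependent" part of the field.
sh = np.zeros((4, 3)); sh[0] = [1.0, 0.5, 0.2]
print(sh_to_rgb(sh, np.array([0.0, 0.0, 1.0])))
```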
Triangle Splatting (Held, Vandeghen, Deliege, and colleagues, 2025) is a recent method that uses differentiable triangle primitives instead of Gaussian ellipsoids. Each triangle has three learnable 3D vertices, color, opacity, and a smoothness parameter, optimized end-to-end through gradient descent. The result is mesh-pipeline compatible: the optimized "triangle soup" can be loaded directly into any standard mesh-based renderer or game engine, achieving over 2,400 FPS at 1280×720 in a game engine on an RTX 4090. Triangle Splatting bridges classical computer graphics and modern differentiable rendering in a way that's particularly attractive for VR, real-time graphics, and integration with established 3D pipelines.
Radiant Foam (Govindarajan, Rebain, Yi, and Tagliasacchi at Simon Fraser, UBC, Toronto, and Google DeepMind, ICCV 2025) takes yet another approach. The scene is represented as a polyhedral mesh — analogous to a closed-cell foam, where each "bubble" emits view-dependent radiance. The method uses real-time differentiable ray tracing rather than rasterization, achieving Gaussian-Splatting-level quality and speed without rasterization's approximations. Like 3D Gaussian Ray Tracing, Radiant Foam handles light transport effects such as reflections and refractions natively, but it does so without requiring specialized ray tracing hardware.
Hybrid methods are increasingly common. Google's RadSplat combines NeRF-level fidelity with 3DGS rendering speed, hitting roughly 900 frames per second. This kind of hybrid work is a sign of the category maturing — the methods are converging on shared problems and borrowing each other's strengths.
The takeaway: radiance fields are not a single technique. They're a category of 3D scene representation, and the field of methods within that category continues to grow. NeRFs and Gaussian Splatting are the most widely deployed today, but they almost certainly won't be the last. The category is what matters — the specific implementations are tools for working within it.
Radiance Fields vs. Traditional 3D Models
If you've worked with 3D before, the natural question is: how do radiance fields differ from the meshes, textures, and materials that have defined 3D for decades?
The short answer: traditional 3D models represent the surface of a scene. Radiance fields represent the appearance of a scene.
A traditional 3D model is built from triangles arranged in 3D space, with image textures wrapped around them and material properties (shininess, roughness, transparency) applied to control how they respond to light. Lighting is calculated at render time by simulating how virtual light sources interact with the materials. This works well for objects designed in software, but it struggles with the complexity of real-world appearance: fine geometric detail, the interplay of materials, subtle subsurface scattering, the way light actually behaves in a real environment.
A radiance field skips that abstraction. It directly stores how the scene looks, including all the view-dependent effects that materials are an attempt to simulate. The reflection on a polished tabletop isn't computed from a "shininess" parameter; it's part of the captured representation. The result is a scene that looks like it was photographed from any new angle, because in a sense, every angle was photographed.
This is why radiance fields excel at photorealism but have historically struggled with editability. A traditional 3D model is a description; you can change the description. A radiance field is more like a recording; changing it is harder. That gap is closing — methods like Gaussian Splatting are explicitly designed to be editable, and mesh extraction from radiance fields (via 2DGS, RaDe-GS, Gaussian Frosting, Texture-GS, and others) lets you bridge into traditional pipelines when needed.
Many professional workflows now combine the two: photogrammetry or LiDAR for measurement-grade geometry, radiance fields for visual realism. The methods are complementary, not competing.
Why Radiance Fields Matter
The fastest way to understand why radiance fields matter is to see what people are building with them. The technology has moved from research curiosity to production tool in under five years, and it now ships in places you've already seen.
You can capture a place with the phone in your pocket. Apps like Niantic's Scaniverse, Luma AI, KIRI Engine, and Gaussian SplatKing (built by RadianceFields' founder) produce shareable radiance field reconstructions from a few minutes of walking around a space. The capture happens on-device for many workflows; no cloud round-trip required.
You're already using radiance fields inside Google Maps. Google's Immersive View uses radiance field techniques (including Gaussian Splatting) to let you fly through cities that were previously flat satellite imagery. If you've used Immersive View on a coffee shop or landmark, you've experienced a radiance field without realizing it.
They're reshaping mapping and surveying. Esri, the largest GIS company in the world, has integrated radiance fields into its mapping pipeline. The same technology underpinning consumer apps is now being adopted by professional surveyors, architects, and engineers who need photorealistic 3D representations of real places.
They're production tools for media, entertainment, sports, and robotics. Volumetric capture studios, autonomous vehicle simulation, NFL replays, music videos by major artists. The applications keep widening as the tooling matures.
You can view them in VR. Apple Vision Pro, Meta Quest, and Pico headsets all support radiance field playback. The viewing experience approaches "photographic memory of a place" — and unlike traditional 360° video, you can actually move around inside the scene, with reflections and lighting that respond to your viewpoint the way reality does.
The deeper reason radiance fields matter: they're a serious candidate for the next dominant medium of visual documentation. Photography captures one viewpoint. Video captures a sequence of viewpoints. A radiance field captures every viewpoint, with lighting that responds to the viewer the way the real world does. As capture and viewing tools mature, the question isn't whether this becomes a routine way to document places, products, and events — it's when.
Common Misconceptions
Three claims about radiance fields come up regularly. All three are wrong.
"NeRFs and Gaussian Splatting are competing technologies."
They're not. They're both radiance field methods, encoding the same kind of information through different mechanisms. NeRFs use neural networks; Gaussian Splatting uses explicit 3D primitives. The most interesting recent research combines them — Google's RadSplat is a NeRF/3DGS hybrid hitting roughly 900 frames per second while preserving NeRF-level fidelity. The two methods are converging, not displacing one another.
"A radiance field is just a 3D scan."
The terminology overlaps in casual conversation, but the technical reality differs. A traditional 3D scan produces a mesh — a representation of where surfaces are. A radiance field produces a representation of how a scene looks, including view-dependent effects that meshes can only approximate. Some workflows use both, but they answer different questions: a mesh is a description of geometry; a radiance field is a description of appearance.
"You can't get a mesh from a radiance field."
You can. Mesh extraction is an active research area, and methods like 2DGS, Neuralangelo, RaDe-GS, Gaussian Frosting, and Texture-GS extract meshes from radiance fields with growing reliability. The output may not match high-end photogrammetry yet, but the gap is closing rapidly. For most workflows that need both photoreal appearance and accurate geometry, hybrid pipelines combining radiance fields with photogrammetry or LiDAR are now the practical answer.
How to Get Started With Radiance Fields
The shortest path from curiosity to a working radiance field is three steps: capture, train, and view. Which tools you use depends on what method you want to work with and what hardware you have.
Step 1: Capture.
The minimum requirement is a series of photos or a video of a scene from multiple angles. The same capture technique that works for traditional photogrammetry — overlapping shots, varied heights, full coverage of the subject — works for radiance fields. For a small object, 50–100 photos from a circle around the subject is plenty. For a room or outdoor space, plan for full coverage at multiple heights.
If you want the fastest possible path, mobile capture apps handle steps 1 and 2 together. Niantic's Scaniverse processes the radiance field on-device for free, as does Gaussian SplatKing; Luma AI and KIRI Engine offer cloud-based pipelines with desktop and mobile inputs.
Step 2: Train (if you're not using an all-in-one app).
If you're capturing with a dedicated camera or working with footage outside a mobile app, you'll train the radiance field on a desktop. Different tools support different methods — Postshot, LichtFeld Studio, and Brush focus on Gaussian Splatting. Nerfstudio supports both NeRFs and Gaussian Splatting through different pipelines.
The single most important hardware decision is your GPU. We've published a practical buying guide for GPUs in radiance field work that covers the tradeoffs by budget.
Step 3: View, edit, share.
Once you have a trained radiance field, you can view it in a browser viewer, edit it, embed it in a site, or load it into a game engine. SuperSplat and Gauzilla Pro handle viewing and editing for Gaussian Splatting. Blurry, Splat Labs, and Reflct handle hosting, sharing, and embedding. Radiance fields also load into Unreal Engine, Unity, Three.js, React Three Fiber, Babylon.js, Spline, and Blender via plugins.
For VR viewing, MetalSplatter and Spatial Fields bring radiance fields to Apple Vision Pro; Meta Quest and Pico headsets have their own playback options.
For the full directory of every tool working in radiance fields, see the Radiance Field Ecosystem.
Where to Follow What's Next
The pace of research and tooling around radiance fields is fast. New papers ship weekly, established tools update monthly, and entirely new methods emerge quarterly. RadianceFields.com tracks the space across news coverage, the platform directory, and the View Dependent podcast.
For the latest on what's shipping right now, see the most recent coverage on the homepage. For company and tool details, see the platforms directory.
Frequently Asked Questions
What is a radiance field, in plain language?
A radiance field is a 3D representation of a scene that captures not just shape and color, but how light behaves at every point — including reflections and shading that change with viewing angle. The simplest way to think about it: a 3D photograph you can walk around inside, with lighting that responds the way reality does.
Is Gaussian Splatting a radiance field?
Yes. 3D Gaussian Splatting is a radiance field representation. The full title of the original paper is 3D Gaussian Splatting for Real-Time Radiance Field Rendering. It encodes the same properties as a Neural Radiance Field — color, density, and view-dependent appearance — using explicit 3D primitives instead of a neural network.
Are NeRFs the same as radiance fields?
Not exactly. NeRFs are one method for building radiance fields, specifically the original neural-network-based approach introduced in 2020. Radiance fields are the broader category that NeRFs belong to. Other methods in the same category include Gaussian Splatting, Plenoxels, SVRaster, Triangle Splatting, Radiant Foam, and 3D Gaussian Ray Tracing.
What's the difference between a radiance field and a 3D model?
A traditional 3D model represents the surface of a scene — meshes of triangles wrapped in textures, with material properties that simulate how light interacts with them. A radiance field represents the appearance of a scene directly, including all the view-dependent effects (reflections, highlights, subtle shading shifts) that materials are an attempt to approximate. Radiance fields tend to look more photorealistic; traditional 3D models tend to be easier to edit. Hybrid pipelines that use both are increasingly common.
Do I need a GPU to view radiance fields?
For training, yes — a modern consumer GPU is the practical floor. For viewing, increasingly no: mobile devices, tablets, and even modest laptops can render radiance fields in real time through optimized viewers. For a deeper breakdown, see our GPU buying guide.
Can I get a mesh from a radiance field?
Yes. Mesh extraction is an active research area, with methods including 2DGS, Neuralangelo, RaDe-GS, Gaussian Frosting, and Texture-GS. SVRaster integrates natively with classical mesh-extraction techniques like Marching Cubes. The mesh quality may not yet match high-end photogrammetry for measurement-grade applications, but the gap is closing rapidly.
Can radiance fields handle moving objects?
Yes, though dynamic radiance fields are still an area of active research. 4D variants — radiance fields that change over time — exist for both NeRFs and Gaussian Splatting. The applications include video, virtual reality experiences, sports broadcasts, and visual effects. Static scenes still represent the most mature workflows, but dynamic content is moving fast.
Is Gaussian Splatting going to replace NeRFs?
No. Both methods are active areas of research, and the most interesting recent work combines them. Google's RadSplat is a NeRF/3DGS hybrid that hits roughly 900 frames per second while preserving NeRF-level fidelity. NeRFs and Gaussian Splatting solve overlapping but distinct problems — they're converging, not competing.