Inside Arcturus: How Radiance Fields Are Reshaping Volumetric Sports Broadcasts

Michael Rubloff

Mar 30, 2026

Arcturus Sports

For decades, volumetric sports capture has hovered just out of reach. The promise is compelling: letting viewers step into a moment of play from any angle, whether standing in the batter’s box or following a puck across the ice. The reality has historically been far more complicated. Volumetric systems have required large camera arrays, controlled studio environments, and heavy post-processing pipelines, often without delivering lifelike 3D fidelity in the end product. Recent advances in radiance fields, however, have begun to change that.

Techniques such as Neural Radiance Fields (NeRFs) and, more recently, Gaussian Splatting have dramatically improved the realism of scene reconstruction from multi-camera video. What once required complex mesh pipelines can now be represented directly as a radiance field, capable of producing views that feel closer to video than to traditional 3D graphics.
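To make the idea concrete, here is a deliberately simplified sketch of the core Gaussian Splatting rendering step: a scene is represented as a set of Gaussians that are projected onto the image plane and alpha-composited front to back. This is a toy illustration, not Arcturus’s pipeline; every name and parameter below is invented for the example, and real implementations work with anisotropic 3D Gaussians, view-dependent color, and GPU rasterization.

```python
import numpy as np

def render_pixel(gaussians, pixel):
    """Composite a pixel color from 2D splats.

    gaussians: list of (center, sigma, rgb, opacity), sorted near-to-far,
    where center is a 2D screen-space position and sigma an isotropic spread.
    """
    color = np.zeros(3)
    transmittance = 1.0  # fraction of light still reaching the camera
    for center, sigma, rgb, opacity in gaussians:
        # Opacity falls off with squared distance from the splat center.
        d2 = np.sum((pixel - center) ** 2)
        alpha = opacity * np.exp(-d2 / (2.0 * sigma ** 2))
        # Front-to-back "over" compositing.
        color += transmittance * alpha * np.asarray(rgb, dtype=float)
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:  # early exit once the pixel is opaque
            break
    return color
```

Because each splat is evaluated with a closed-form falloff and simple compositing, the whole scene can be rendered in real time on consumer GPUs, which is one reason the technique fits broadcast use cases so well.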

One company pushing these capabilities directly into live sports production is Arcturus, a volumetric video platform that has been refining capture and reconstruction pipelines for years. Their work focuses on transforming multi camera footage from stadium environments into lifelike three dimensional replays that can be explored from virtually any perspective.

To better understand how radiance fields are influencing real-world production pipelines, I spoke with Devin Horsman, CTO of Arcturus, and Steve Sullivan, the company’s CEO. Our conversation covered everything from the moment they first encountered NeRF research to the technical challenges of deploying volumetric capture systems inside live sports venues.

Do you remember when you first saw either NeRF or Gaussian Splatting?

DH: I’d been working on volumetric video mesh compression, streaming, etc., for years when I first came across the original NeRF paper and had my mind blown. Then, after reviewing it carefully, I could see that there would be quite some distance until it was useful in the kind of context our business operated in, so it was more of something to study than expand upon. The Instant NGP and Gaussian Splatting papers covered the next steps, and we started seriously looking at radiance field solvers around the time of that paper.

How have radiance fields transformed your business?

SS: Fundamentally, field solvers have crossed the threshold from “looks like computer graphics” to “looks like video” for most people, enabling a much wider consumer audience. The kind of artifacts that occur in areas of limited coverage are less objectionable, and the view-dependent and transmission-dependent effects really sell the reality of the scene.

What made Gaussian splatting a particularly strong fit for your use cases compared to other reconstruction approaches you had evaluated?

DH: It’s a strong fit in that it can be rendered in real time on consumer-grade hardware, and any artifacts that do show up are less distracting than with alternative approaches. Vanilla Gaussian splatting isn’t a great fit due to the high number of cameras required, forced-perspective effects that appear in solving, poor handling of noisy calibrations or uneven exposures, slow solve times, and relatively large output files. But we’ve been working to overcome those one by one with our proprietary changes to the technique and by integrating some more recent results from the literature.

When you talk to leagues or broadcasters today, what’s the most common misconception they still have about volumetric capture and what usually changes their mind once they see the reconstructions?

SS: They often assume that we are morphing between images, or that we will only be able to get views from a perspective close to where we have placed cameras. Putting the camera into the eyes of one of the contestants is usually an aha! moment. Or they expect that the install footprint is huge, complicated, and will distract the live audience. In fact, it has yet to be noticed by the audience unless pointed out. Our cameras are around the size of a fist in most cases.

Some of the shots you’ve released, like perspectives from directly behind a catcher’s mitt, feel almost impossible from a traditional broadcast mindset. Without giving away secrets, what has to go right across capture, reconstruction, and synchronization to make something like that possible?

SS: A few examples: High-quality calibration is a much bigger part of the puzzle than people may expect. There is very limited time to calibrate cameras prior to a match. You have limited options for creating features, and none of the standard approaches in the literature are adequate. We have to assume cameras are shaking or getting bumped at every frame.

DH: During the solve, some cameras may have better or worse quality data, and it can be important to sample that data preferentially depending on where in the solve you are.

SS: Framing and positioning views effectively becomes important with a small number of cameras as well.

When volumetric content reaches fans today, where do you see it creating the most real value, and where do you think that value will shift over time?

SS: Fans appreciate examining a situation that standard cameras can’t easily see, both to validate calls and to see how close something was to determining the game’s outcome. In terms of being able to tell a more personal story for a given athlete, there’s a lot more opportunity to zoom in and out on the context, with camera positioning that feels more personal and cinematic.

The website mentions that there is an Arcturus Unreal Engine plugin. Is that currently available?

DH: It is not currently generally available, but is available to value-add partners.

What sport are you personally most excited to see fully realized in lifelike 3D, and how far away do you think that experience actually is?

DH: I’m personally most interested in hockey, basketball, dance, figure skating. I think we will have demonstrated all of these in 2026.

SS: I agree, there’s huge potential for hockey to engage a whole new audience. It’s such a fast, physical sport, and if you’ve never played, it’s hard to appreciate. We can change that. It’s true for nearly any sport: we can help fans understand and appreciate what’s happening, which is a great way to expand the fan base.

If you could instantly solve one problem across the entire pipeline, capture, reconstruction, fidelity, or delivery, which would unlock the biggest leap forward, and why?

DH: Just one problem? No fair! Bringing latency down to real time without sacrificing quality is the biggest short-term unlock; we’re working diligently on this and have already made order-of-magnitude improvements. Unfortunately, YONO-style splatting isn’t remotely close to ready for real-world calibration scenarios, but that route looks very promising. For now we’re focused on making iterative splatting very fast.

Learn more about Arcturus from their website here.