

Michael Rubloff
Apr 10, 2025
I’ve often wondered: can imaging truly break free of 2D constraints? Is it possible for a 3D image to be so convincing that, from any angle, it would be indistinguishable from a still 2D photo? Three years ago, I was surprised to find that the answer was yes, at least for static objects, thanks to Neural Radiance Fields. Upon realizing this, my head swirled with all the possibilities a fundamentally new imaging medium would bring. I couldn’t stop thinking about how the ways we interact with moments in time have been shaped by 2D.
NVIDIA gave me the chance to write about some of these uses in 2023, when I captured my friends and family in lifelike, static 3D.
There was so much to explore in freezing a moment in time in 3D that I didn’t even consider the future of video.
However, I found myself underestimating the progress of technology yet again when I saw the team at Infinite Realities putting forth lifelike dynamic captures: that is, hyper-real 3D videos. I had barely wrapped my head around the possibility of reconstructing the static world in 3D when I reached out to them, awestruck, and wrote an article about some of the projects they were working on, hoping that one day I myself would be captured.
A little over a year later, I found myself in London for Third Dimension’s exciting launch event, moderating a conversation with leading computer vision experts like 3D Gaussian Splatting co-first author George Kopanas and Third Dimension’s CEO, Tolga Kart.
The following day, I made my way two hours into the UK countryside to Ipswich, to a place I had continued to think about: Infinite Realities. I consider myself a radiance field enthusiast, both as an evangelist and as a capturer; in the past four years, I have taken over six thousand captures. Still, my total shooting experience pales in comparison to Lee and Henry’s work at Infinite Realities.
Arriving at their headquarters, there was so much to see. They have compiled such an incredible collection of demos and stories that we spoke for a couple of hours before even getting to the purpose of the trip.
It’s hard to describe the feeling of stepping in front of 176 cameras and 484 lights surrounding you. At first, the experience was overwhelming: the stage flooded with light, every camera calibrated to fire at the exact same millisecond, and the knowledge that every wasted second cost 127 gigabytes. The sheer volume of content captured, especially compared to our traditional expectations of photos or videos, is staggering. Their reconstruction pipeline? A heavily calibrated, structured, and scalable “flipbook”: a sequence of per-frame Gaussian Splats.
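To make that “flipbook” idea a little more concrete, here is a minimal sketch of what per-frame processing could look like. To be clear, `calibrate_cameras` and `train_gaussian_splat` are hypothetical placeholders of my own, not Infinite Realities’ actual tooling, and the real pipeline is far more involved.

```python
from pathlib import Path

NUM_CAMERAS = 176  # synchronized cameras on the stage

def process_flipbook(capture_dir: Path, output_dir: Path) -> None:
    """Sketch of a 'flipbook' pipeline: one Gaussian Splat per frame.

    Assumes capture_dir holds one subfolder per frame, each containing
    176 synchronized images. calibrate_cameras() and
    train_gaussian_splat() are hypothetical stand-ins.
    """
    # The cameras are rigidly mounted, so calibration can plausibly
    # happen once per session rather than once per frame.
    cameras = calibrate_cameras(capture_dir / "calibration")

    for frame_dir in sorted(capture_dir.glob("frame_*")):
        images = sorted(frame_dir.glob("*.png"))
        assert len(images) == NUM_CAMERAS, "missing a camera view"

        # Each frame is reconstructed independently, like one page of
        # a flipbook; playback steps through the resulting PLY files.
        splat = train_gaussian_splat(images, cameras)
        splat.save_ply(output_dir / f"{frame_dir.name}.ply")
```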
In this 30-second clip, over 300,000 still images of me were captured, a number that far exceeds all the images taken of me in my entire life. While this realization is surreal, I view it as a natural progression of technology. What might feel strange or unnatural today is merely a stepping stone toward the future. I do not believe we should let past expectations limit the potential of the future.
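For a sense of scale, the figures are self-consistent. Assuming all 176 cameras fired for the full 30 seconds, the article’s own numbers imply a capture rate of roughly 57 frames per second and about 13 megabytes per still, squarely in raw-image territory:

```python
cameras = 176
seconds = 30
total_images = 300_000
bytes_per_second = 127e9  # "every wasted second was 127 gigabytes"

fps = total_images / (cameras * seconds)
print(f"implied frame rate: {fps:.0f} fps")  # ~57 fps

per_image = bytes_per_second / (cameras * fps)
print(f"implied size per still: {per_image / 1e6:.0f} MB")  # ~13 MB
```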
Today, we have the means to reconstruct the dynamic world around us, albeit currently within state-of-the-art facilities like Infinite Realities. Research papers might suggest that consumer-level fidelity is still a distant dream, and for the most part, that’s true. Yet compute requirements have recently plummeted, while GPU manufacturers like NVIDIA continue to make astonishing strides in both hardware and neural rendering. Companies such as Meta are further lowering the barrier to entry with headset hardware and demos like Hyperscape, which leverage Radiance Fields to create lifelike, immersive experiences.
We are excited to announce the release of a free VR demo in Unity, allowing anyone to experience this technology firsthand. We are also publicly releasing the set of 1,800 trained PLY files, along with the 300K 2D source images, for further experimentation. To learn more about Infinite Realities, their services, or potential collaborations, please reach out to them here.
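For anyone who wants to poke at the released files, here is a minimal sketch of loading one with the plyfile package. I’m assuming the PLYs follow the common 3D Gaussian Splatting export layout (positions plus per-splat opacity, scale, and rotation fields); check the actual headers before relying on this, and note the filename is hypothetical.

```python
import numpy as np
from plyfile import PlyData  # pip install plyfile

def load_splat_ply(path: str) -> dict[str, np.ndarray]:
    """Read a Gaussian Splat PLY, assuming the standard 3DGS vertex layout."""
    vertex = PlyData.read(path)["vertex"]

    xyz = np.stack([vertex["x"], vertex["y"], vertex["z"]], axis=-1)
    # Opacity is typically stored pre-sigmoid in 3DGS exports.
    opacity = 1.0 / (1.0 + np.exp(-np.asarray(vertex["opacity"])))

    return {"xyz": xyz, "opacity": opacity}

splat = load_splat_ply("frame_0001.ply")  # hypothetical filename
print(f"{len(splat['xyz']):,} Gaussians in this frame")
```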
I am eternally grateful to Lee and Henry for allowing me to experience this and, hopefully, for helping show that we are already much further along in extending our memories and digital lives than people realize.
Even watching from the sidelines, the rate at which this technology is progressing is genuinely unbelievable, and if not for the simultaneous progress of LLMs and diffusion video models, I believe this vertical would be receiving significantly more attention. Despite the progress of those adjacent technologies, I believe Radiance Fields and the advancement of lifelike 3D should be at the forefront of our minds, with perhaps the most immediately recognizable impact of the three.
The initial move from 2D photography to 2D video took almost seventy years. Through applied research and effort, I believe we can accelerate the advancement and adoption of lifelike 3D into the world around us. There is a straightforward path to the rise of a new imaging medium, one that will reshape and extend the ways we interact with the world.
Imaging has chronicled the human experience for thousands of years, evolving gradually over time. In this context, 2D was never meant to be the final frontier in capturing our shared moments. Today, we stand closer than ever to realizing lifelike 3D imaging, with the foundational technologies now in place. While my generation was the first to have our lives documented online via social media, I am thoroughly convinced that future generations will soon be the first to have their lives captured in 3D.