Third Dimension Introduces SuperSim, Simulator Built From Reality

Michael Rubloff

Dec 9, 2025


For years, robotics teams have lived with a limitation at the center of their workflows. They collect mountains of real-world sensor data: hours of video, LiDAR, IMU traces, and GPS tracks. And yet the simulators they train in bear little resemblance to the world those sensors captured. The gap between a robot’s lived experience and its synthetic training environment has always been obvious, but until recently, there wasn’t a clear alternative.

Third Dimension believes this is the moment when the tools finally catch up, when the way robots see the world can finally be the way they train in it. Today, the company is introducing its new platform, SuperSim, a neural simulator built directly from reality itself. Rather than assembling simulation scenes with 3D artists, technical directors, and game-engine tooling, SuperSim reconstructs the environment from raw logs and turns it into a high-fidelity simulation within hours.

If radiance fields like NeRFs and Gaussian Splatting can rebuild the physical world with lifelike continuity, why shouldn’t robotics simulation follow the same trajectory?

Traditional simulators have helped robotics move beyond controlled environments, but their limitations are well understood inside the industry. The first is the domain gap. Robots trained inside something that looks and moves like a video game tend to behave differently when confronted with glare, clutter, shadows, foliage, unmodeled occlusions, and the human environments that were never captured in game-engine asset packs.

The second is the cost structure. Companies routinely invest teams of ten or more people over months to hand-author scenes that still feel uncanny. Updating a single asset, such as changing a warehouse layout, modifying signage, or adding a new shelf, can take weeks. And yet the real environment evolves daily, and at scales that strain today’s tools: a single autonomous fleet can generate data on the order of petabytes. No hand-authored simulator can realistically keep pace with that rate of change.

The third is the waste. Robots gather extraordinary volumes of spatial data in the field, but almost none of it gets pulled back into the simulator. The world outside the robot improves the world inside its models only sporadically, if at all.

SuperSim aims to bring immediate value to all of that data. Its pipeline starts with what the robot actually saw: customers send sensor logs containing video, LiDAR, IMU readings, and positional traces, and Third Dimension reconstructs the scene using a mixture of radiance fields, SLAM, SfM, and classical geometric alignment.
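Third Dimension has not published its API, but conceptually the ingestion step looks something like the minimal Python sketch below. Every type and function name here is hypothetical, and the reconstruction stages are stubs standing in for radiance-field training, SLAM/SfM, and LiDAR alignment:

```python
from dataclasses import dataclass

@dataclass
class SensorLog:
    """One robot capture: the raw inputs a clip-to-sim pipeline ingests."""
    video_frames: list   # per-frame RGB images
    lidar_sweeps: list   # per-sweep point clouds
    imu_samples: list    # accelerometer / gyro readings
    positions: list      # GPS or odometry traces

@dataclass
class ReconstructedScene:
    """Output of reconstruction: geometry plus a view-dependent appearance model."""
    trajectory: list       # poses recovered by SLAM or SfM
    geometry: object       # fused point cloud or mesh
    radiance_field: object # trained NeRF / Gaussian-splat model

def estimate_trajectory(log: SensorLog) -> list:
    # Stand-in for SLAM/SfM: align frames, LiDAR, and IMU into one pose track.
    return [0.0] * len(log.video_frames)

def fuse_geometry(log: SensorLog, trajectory: list) -> object:
    # Stand-in for classical geometric alignment of the LiDAR sweeps.
    return {"points": sum(len(s) for s in log.lidar_sweeps)}

def train_radiance_field(log: SensorLog, trajectory: list) -> object:
    # Stand-in for radiance-field optimization on the posed frames.
    return {"trained": True, "frames": len(log.video_frames)}

def clip_to_sim(log: SensorLog) -> ReconstructedScene:
    """Rebuild a simulation-ready scene directly from one sensor clip."""
    trajectory = estimate_trajectory(log)
    return ReconstructedScene(
        trajectory=trajectory,
        geometry=fuse_geometry(log, trajectory),
        radiance_field=train_radiance_field(log, trajectory),
    )
```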

The resulting scene is derived from the real world. And because the system was built with robotics in mind, it doesn’t stop at reconstruction: SuperSim applies AI-driven world modeling to generate novel views, enrich missing regions, and insert 3D objects for training and validation.

“We wanted to rethink simulation by taking advantage of the most valuable resource: reality itself. SuperSim reconstructs the real world without heavy production pipelines, and leverages world modelling to take it to the next level. It's exciting that the timing finally is right. Advances in radiance fields, AI, and GPU compute have made it possible to build simulations straight from reality, at speeds that match how robotic fleets operate,” says Tolga Kart, CEO of Third Dimension.

The entire process cuts production time significantly compared to traditional modeling, with simulations rendering at above-real-time rates, and performance continues to improve rapidly. Third Dimension is already working with teams across autonomous vehicles, drones, and industrial robotics.

All of these scenes are fed to SuperSim as sensor clips. The platform rebuilds them, generalizes the environment, and returns a simulation that matches the textures, lighting, and clutter of the real world far more closely than anything authored in a traditional engine.

Most reconstructed clips so far span 30–60 seconds, long enough to capture the meaningful decisions, failures, and interactions that teams need to test repeatedly.

Teams already using Omniverse for visualization or synthetic data generation can treat SuperSim as a “clip to sim” bridge: something that turns raw sensor logs into simulation-ready environments without requiring asset libraries or 3D labor. SuperSim also takes simulation a step further by using AI-based world modeling to expand a single capture into thousands of variations, including generating novel views and inserting new 3D objects.
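As a rough illustration of what “one capture, thousands of variations” could mean in practice, the sketch below randomizes lighting and inserted objects per variant, in the spirit of domain randomization. All names are hypothetical and are not SuperSim’s actual interface; the world modeling itself is abstracted away:

```python
import random

# Hypothetical knobs a world model might expose; none of these names
# come from SuperSim's actual product.
LIGHTING_PRESETS = ["noon", "overcast", "dusk", "night"]
INSERTABLE_OBJECTS = ["pallet", "forklift", "pedestrian", "traffic_cone"]

def generate_variations(scene: dict, n: int, seed: int = 0) -> list:
    """Expand one reconstructed scene into n randomized training variants."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        variants.append({
            "base_scene": scene,
            "lighting": rng.choice(LIGHTING_PRESETS),
            # Insert 0-3 extra objects at random ground-plane positions.
            "inserted_objects": [
                (rng.choice(INSERTABLE_OBJECTS),
                 (rng.uniform(-10.0, 10.0), rng.uniform(-10.0, 10.0)))
                for _ in range(rng.randint(0, 3))
            ],
        })
    return variants

variants = generate_variations(scene={"id": "warehouse_clip_042"}, n=1000)
print(len(variants), variants[0]["lighting"])
```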

Customers are already describing levels of fidelity and dynamic accuracy they couldn’t reach in their previous simulators, along with significant speedups in generation. In practice, SuperSim becomes a feedstock generator for robotics simulation stacks.

One of the more forward-leaning experiments underway is cross-platform sensor transfer. Because SuperSim’s reconstructions are not tied to a single sensor configuration, they can be re-rendered as if the robot had different cameras or LiDARs. For OEMs who operate fleets with mixed sensor suites, this could become a powerful tool for unifying perception models and reducing the fragmentation that makes large-scale deployment so difficult.
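Sensor transfer is possible because the reconstruction stores the scene in 3D rather than in any one camera’s image plane. The numpy sketch below shows the standard pinhole-camera idea underneath (not Third Dimension’s code): the same reconstructed 3D points can be projected through two different intrinsic matrices to emulate two different cameras on the same scene.

```python
import numpy as np

def intrinsics(fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Pinhole intrinsic matrix K for a given focal length and principal point."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-space points to Nx2 pixel coordinates."""
    uvw = points_cam @ K.T            # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# The same reconstructed geometry, "seen" by two different camera models.
points = np.array([[0.5, -0.2, 4.0], [-1.0, 0.3, 6.0], [2.0, 1.0, 10.0]])
K_wide = intrinsics(fx=400.0, fy=400.0, cx=640.0, cy=360.0)  # wide-angle rig
K_tele = intrinsics(fx=900.0, fy=900.0, cx=640.0, cy=360.0)  # narrow-FoV rig

print(project(points, K_wide))  # pixel positions under the deployed sensor
print(project(points, K_tele))  # same scene under a different OEM's sensor
```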

If 2023 and 2024 were the years radiance field methods proved their visual credibility, 2025 is becoming the year they demonstrate operational value. SuperSim is a product that ingests logs, reconstructs the world, and returns simulation scenes that robotics teams can use the same day. The simulator is no longer a handcrafted approximation of the world. It’s becoming an extension of the world itself.

And if these systems continue to shrink the distance between perception and simulation, robotics may be entering the first era in which simulators don’t just imitate the world, they originate from it. To learn more about SuperSim, visit Third Dimension’s website.
