Niantic Spatial Releases Major Scaniverse Update

Michael Rubloff

Apr 7, 2026

What began as a consumer scanning app, Scaniverse, has steadily evolved into something far more foundational. Today, Niantic is introducing a new chapter for its spatial stack, positioning Scaniverse not just as a capture tool, but as the front door to a broader system designed to map, understand, and localize within the physical world. Just over a year ago, Niantic sold much of its IP, opting to focus on Niantic Spatial and PhysicalAI.

Rather than a standalone mobile app, Scaniverse is now positioned as an integrated platform spanning mobile capture and a new web-based workspace. From a single scan, or multiple collaborative captures, users can generate visual positioning maps, meshes, and Gaussian splats within the same system.

The updated platform still supports free capture from standard smartphones, but now extends to 360-degree camera inputs for reconstructing larger environments on its paid tier, which starts at $20 per month. It can ingest .insv files from Insta360 cameras natively. For other 360 cameras, such as the DJI Osmo360, Ricoh Theta, and GoPros, users can upload the stitched, equirectangular .mp4 file. On the web side, teams can upload, process, and inspect scenes directly in the browser, at scales ranging from industrial sites to entire cities.

Assets can be downloaded as meshes in FBX or as splats in PLY and SPZ, Niantic’s open-source Gaussian splat compression format.

Alongside Scaniverse, Niantic is rolling out VPS 2.0, a significant update to its visual positioning system. Where earlier versions required pre-scanned environments, VPS 2.0 extends localization globally, even in areas that have not yet been explicitly mapped. In mapped environments, it delivers centimeter-level 6DoF localization. Outside of them, it augments GPS with improved heading and positional accuracy, particularly in dense urban or indoor settings where satellite signals degrade.
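For readers new to the term, 6DoF means a full rigid pose: three rotational degrees of freedom plus three translational ones. The sketch below is a generic illustration of what consuming such a pose looks like (this is not Niantic's API): a unit quaternion and a translation applied to a map point.

```python
import math

# Illustrative sketch (not Niantic's API): a 6DoF pose is a rotation
# (3 DoF, here a unit quaternion) plus a translation (3 DoF).

def quat_to_matrix(w, x, y, z):
    """Convert a unit quaternion to a 3x3 rotation matrix."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def apply_pose(quat, translation, point):
    """Rotate then translate a 3D point by a 6DoF pose."""
    r = quat_to_matrix(*quat)
    return tuple(
        sum(r[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

# Example pose: 90-degree yaw about the z-axis plus a 1 m shift in x.
half = math.radians(90) / 2
pose_q = (math.cos(half), 0.0, 0.0, math.sin(half))
```

Applying that pose to the point (1, 0, 0) rotates it onto the y-axis and then shifts it, landing at (1, 1, 0); centimeter-level localization means the translation part of such a pose is accurate to within centimeters.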

Gaussian splats and meshes provide the geometry and visual fidelity, while VPS provides the spatial grounding. Niantic is positioning this stack as part of a “Large Geospatial Model,” a persistent, machine-readable representation of the physical world. The Niantic Spatial Development Kit 4.0, expected next month, will unify access across Unity, Swift, Android, and ROS 2, connecting directly into Scaniverse and VPS. The inclusion of ROS 2 signals a deeper push into robotics, an area where many companies are turning their attention.

Robotics systems operating in GPS-denied environments, construction and energy sites requiring shared spatial context, and large venues building persistent location-aware experiences all stand to benefit from tighter integration between capture, reconstruction, and positioning.

The Scaniverse experience itself looks quite different now. In addition to local capture and reconstruction, the app now lets you capture on-device or upload footage for cloud processing. It's important to note that after your data has been uploaded, Scaniverse runs a quick quality-control check before reconstruction begins. Once that check completes, you must select the capture again to begin processing. Processing times vary with the size and scope of a scene, but in my testing, scenes were returned within a couple of hours.

What was once an endpoint, a reconstructed model on your phone, is now an input into a continuously evolving spatial system. Multiple users can contribute to a shared environment over time, with data fused into a unified model that updates as new captures are added.

In many ways, this reflects a broader shift happening across the radiance field ecosystem. The conversation is moving beyond how we reconstruct scenes toward how those scenes are indexed, localized, and made usable in PhysicalAI applications.

Learn more on their website.