
StorySplat has returned with its most significant update yet. After a long stretch without major releases, version 2.0 arrives as a complete rebuild: the platform has been rewritten from the ground up on the PlayCanvas Engine, bringing improvements across the board.
For those unfamiliar with the platform, StorySplat has become one of the most accessible ways to transform a trained Gaussian splat into an interactive narrative experience. A creator uploads a splat, then builds a guided journey through the environment using camera viewpoints, hotspots, audio, and interactive elements.
With version 2.0, the technical foundations of that experience have been dramatically expanded, not to mention the new UI. The new PlayCanvas-based architecture introduces streaming and compression techniques: instead of loading entire environments at once, StorySplat now streams level-of-detail chunks as visitors explore a scene.
The platform now supports SOGs compression. Combined with web worker support in the editor, this lets creators edit large environments while the interface stays responsive even under heavy load.
The StorySplat viewer itself now powers the editing environment. What creators see while editing is exactly what visitors will experience once the scene is published. This removes the need for preview modes and makes it easier to compose viewpoints, lighting, and interactions with confidence. Inline splat editing also allows creators to clean up stray gaussians or refine captures without leaving the platform, supported by a 50-step undo and redo history.
The release also introduces several capabilities that push StorySplat closer to a full interactive environment engine for radiance field scenes.
Scenes can now be relit dynamically using up to sixteen lights, with directional, point, and spot lighting. First-person walk mode has also been added, supported by automatic voxel-based collision generation. It appears to be the same implementation that PlayCanvas merged recently. Using GPU acceleration through WebGPU, the system converts splat geometry into a sparse voxel octree that approximates the environment’s physical structure, allowing visitors to move through complex scenes without falling through surfaces.
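The collision idea above can be sketched in a few lines. This is a minimal illustration under my own assumptions, not StorySplat's or PlayCanvas's actual implementation: it quantizes gaussian centers into a sparse occupancy set (a flat stand-in for the sparse voxel octree) so that a collision test becomes a hash lookup. The voxel size and function names are placeholders.

```typescript
// Sketch: derive a sparse occupancy grid from splat centers for collision.
// A real implementation would build a hierarchical octree on the GPU;
// here a Set of quantized voxel keys plays the same role.

type Vec3 = [number, number, number];

const VOXEL_SIZE = 0.25; // assumed voxel edge length in scene units

// Quantize a point to the integer coordinates of the voxel containing it.
function voxelKey(p: Vec3): string {
  const ix = Math.floor(p[0] / VOXEL_SIZE);
  const iy = Math.floor(p[1] / VOXEL_SIZE);
  const iz = Math.floor(p[2] / VOXEL_SIZE);
  return `${ix},${iy},${iz}`;
}

// Mark every voxel that contains at least one gaussian center as solid.
function buildOccupancy(centers: Vec3[]): Set<string> {
  const occupied = new Set<string>();
  for (const c of centers) occupied.add(voxelKey(c));
  return occupied;
}

// Collision query: is this position inside an occupied voxel?
function isSolid(occupied: Set<string>, p: Vec3): boolean {
  return occupied.has(voxelKey(p));
}
```

With an approximation like this, a first-person controller only needs to test the voxels around the visitor's feet each frame to keep them from falling through the floor.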
Another major addition is a connection system designed to link multiple splat environments together. Portals placed inside a scene can transport viewers into another capture, effectively allowing creators to build multi-room virtual spaces while loading only one splat at a time. A higher-level feature called StoryChains can automatically group scenes into larger narratives with a single entry point.
Cinematic transitions and customizable themes further expand how these experiences can be presented. The new theme editor exposes dozens of properties that control the visual interface surrounding the scene, making it possible to tailor experiences to specific brands, publications, or educational contexts.
StorySplat scenes can now be embedded using an official NPM package, allowing developers to drop the viewer into React, Vue, Svelte, or vanilla JavaScript applications with a single function call. Scenes can also be exported as a self-hosted package and deployed independently from the StorySplat platform.
Support for dynamic splat content is also expanding. Version 2.0 introduces flipbook playback for 4D gaussian splats, opening the door to time-varying volumetric scenes inside StorySplat narratives.
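Flipbook playback essentially maps elapsed time onto a fixed sequence of pre-baked splat frames. The sketch below shows the timing logic under assumed names and a guessed default frame rate; it is not StorySplat's actual API.

```typescript
// Map elapsed playback time to a frame index in a looping 4D capture.
// frameCount pre-baked splat frames are cycled at a fixed rate.
function flipbookFrame(elapsedMs: number, frameCount: number, fps = 30): number {
  const frame = Math.floor((elapsedMs / 1000) * fps);
  return frame % frameCount; // wrap around to loop the sequence
}
```

Each rendered frame, the viewer would swap in the splat whose index this returns, which is cheap compared with streaming a true continuous 4D representation.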
One of the cooler new capabilities is session replay: instead of simple page views, StorySplat can now record and replay exactly how visitors move through a scene, giving creators a spatial understanding of how audiences explore their environments.
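Under the hood, this kind of spatial replay can be as simple as logging timestamped camera poses during a visit and interpolating between them on playback. A hedged sketch, with class and field names of my own invention rather than StorySplat's:

```typescript
type Vec3 = [number, number, number];

interface PoseSample {
  t: number;      // milliseconds since the session started
  position: Vec3; // camera position at that instant
}

class SessionTrace {
  private samples: PoseSample[] = [];

  // Called whenever the visitor's camera moves.
  record(t: number, position: Vec3): void {
    this.samples.push({ t, position });
  }

  // Replay: linearly interpolate the camera position at time t,
  // clamping to the first/last sample outside the recorded range.
  positionAt(t: number): Vec3 {
    const s = this.samples;
    if (t <= s[0].t) return s[0].position;
    if (t >= s[s.length - 1].t) return s[s.length - 1].position;
    let i = 1;
    while (s[i].t < t) i++;
    const a = s[i - 1], b = s[i];
    const u = (t - a.t) / (b.t - a.t);
    return [0, 1, 2].map(
      k => a.position[k] + u * (b.position[k] - a.position[k])
    ) as Vec3;
  }
}
```

Sampling at even a few poses per second keeps traces small while still letting a creator scrub through a visitor's path inside the scene itself.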
Additionally, students can apply for education accounts using a university email address.
It's been a while since their last update, but I was thrilled to see this release and am excited to start using the platform again! Learn more about StorySplat on their website.
