Gaussian Splatting in Superman

Michael Rubloff

Sep 4, 2025

I walked into the theater already braced for it. Twitter had tipped me off that somewhere in the new Superman, the VFX company Framestore had used Gaussian splatting. I warned my girlfriend I would be insufferable the moment it appeared, then settled in. I did not have to wait long. Within the first few minutes, Superman’s biological parents materialize in front of us as volumetric messages. It is a classic sci-fi idea, except this time it looks photographic, not like a hologram gag. You accept it without thinking, which turns out to be the point.

“We did not want it to read as a typical three-dimensional hologram,” Framestore’s Montreal-based CG Supervisor Kevin Sears told me. “The intention was subtle and photographic.” The team had been exploring options for months with the client, looking for a way to ground an alien artifact in reality. Gaussian splats gave them a volumetric performance with real light, real texture, and just enough Kryptonian strangeness when the message glitches.

The path to that look started in Ipswich, at Infinite Realities. The same Infinite Realities that captured me last year. Early in 2024, Sears and production began testing with the Infinite Realities crew. Lee and Henry, the studio’s two-person whirlwind, ran point. Before the actors stepped into the rig, they pulled on white wardrobe, recreated the DP’s lighting, and hammered through rounds of tests. The brief was tricky. The Fortress of Solitude is white crystals in a white room. Jor-El and Lara wear flat white costumes. Reading depth against depth against depth would break most techniques. A mesh digi-double could have worked, but you would feel it. The faces would be a little off, the hair would need grooming work, and the whole thing would drift toward CG character land. The film needed a human performance that simply existed in space.

Infinite Realities captured Bradley Cooper and Angela Sarafyan with roughly two hundred cameras, then trained a single two-minute take into a dynamic splat sequence. Training ran on their GPUs for several days. Framestore reviewed, asked for a longer brute-force pass, and blessed the result. What came back to Montreal was not a mesh at all, just a sequence of PLY files representing a living performance in 3D. No UVs, no topology. Pure radiance fields.
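
To make the "no mesh" point concrete, here is a minimal sketch of what reading one frame of such a PLY sequence can look like. The file path is hypothetical, and the attribute names follow the common 3D Gaussian Splatting convention (x, y, z, scale_0-2, rot_0-3, opacity, f_dc_0-2), which may not match Infinite Realities' exact output.

```python
# Minimal sketch: load one frame of a dynamic Gaussian splat sequence.
# Assumes the common 3DGS PLY layout; the real files may differ.
import numpy as np
from plyfile import PlyData

def load_splat_frame(path):
    v = PlyData.read(path)["vertex"]
    positions = np.stack([v["x"], v["y"], v["z"]], axis=-1)                          # (N, 3) centers
    scales    = np.exp(np.stack([v["scale_0"], v["scale_1"], v["scale_2"]], axis=-1))
    rotations = np.stack([v["rot_0"], v["rot_1"], v["rot_2"], v["rot_3"]], axis=-1)  # quaternions
    opacities = 1.0 / (1.0 + np.exp(-np.asarray(v["opacity"])))                      # sigmoid-activated alpha
    colors    = np.stack([v["f_dc_0"], v["f_dc_1"], v["f_dc_2"]], axis=-1)           # DC spherical harmonics
    return positions, scales, rotations, opacities, colors

# A two-minute take at 24 fps is roughly 2,880 of these files, one per frame.
frame = load_splat_frame("jor_el_take/frame_0001.ply")  # hypothetical path
```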

The moment you have that, the creative doors swing open. Sears’ small team pulled the sequence into Houdini using the GSOPs plugin, then started experimenting. Noise fields, transforms, rotations, anything that could move energy through the performance without breaking it. The show needed a “corrupted transmission” feel during key beats, but it could not devolve into a wavy screensaver. The solution was to segment the performance into packets that could destabilize independently. Think slices of a person that briefly slide, stutter, or misalign in 3D, then reassemble in time with the dialogue. Because the splats carry directional lighting, a rotated head still reads correctly. It feels like a real recording fighting through interference.
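
The packet idea is easier to see in miniature. The sketch below is plain NumPy rather than Framestore's actual Houdini and GSOPs setup, and every name in it is illustrative: it slices the splat centers into horizontal bands and lets each band drift by its own pseudo-random amount, scaled by a glitch intensity you can key to the dialogue.

```python
# Conceptual sketch of the "corrupted transmission" beat: slice the splat
# cloud into packets and let each one destabilize independently.
import numpy as np

def destabilize(positions, t, num_packets=24, max_offset=0.05, seed=7):
    """positions: (N, 3) splat centers for one frame; t: glitch intensity in [0, 1]."""
    rng = np.random.default_rng(seed)
    y = positions[:, 1]
    edges = np.linspace(y.min(), y.max(), num_packets + 1)
    packet_id = np.clip(np.searchsorted(edges, y) - 1, 0, num_packets - 1)

    # One offset per packet, biased sideways so the performance still reads.
    offsets = rng.uniform(-1.0, 1.0, size=(num_packets, 3)) * [1.0, 0.1, 1.0]
    return positions + t * max_offset * offsets[packet_id]
```

Animating t back to zero snaps the packets into place, which is what lets the glitches land on specific lines of dialogue without ever turning into that wavy screensaver.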

Despite the technology being so new, the Framestore and Infinite Realities teams were able to forge a path. Houdini ran the creative, but Framestore leaned on a custom SIBR Viewer for sequence output. Infinite Realities provided the custom build with better anti-aliasing and command line control. The viewer could not translate splats inside the scene at the time, so Framestore inverted the problem. They exported tracked shot cameras from the plates, scaled and offset those into the splat’s world, and baked look files that lined everything up. From there, Sears launched renders directly, pushing tweaks like increased opacity to prevent faint values bleeding through from the back of a head. Output was straightforward: 4K, 8-bit PNG sequences with alpha, ready for comp.
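
The camera trick is worth spelling out, because it is exactly the kind of inversion early tooling tends to force. Rather than moving the splats, you build one similarity transform per shot and push the tracked camera through it. The sketch below assumes a 4x4 camera-to-world matrix from the shot track; the function and parameter names are mine, not Framestore's.

```python
# Sketch: re-express a tracked shot camera in the splat's coordinate frame
# with a scale, rotation, and offset, instead of translating the splats.
import numpy as np

def camera_to_splat_space(cam_to_world, scale, rotation, offset):
    """cam_to_world: (4, 4); rotation: (3, 3); offset: (3,) splat origin in plate space."""
    S = np.diag([scale, scale, scale, 1.0])
    R = np.eye(4); R[:3, :3] = rotation
    T = np.eye(4); T[:3, 3] = -np.asarray(offset)
    plate_to_splat = S @ R @ T            # one similarity transform, baked per shot
    return plate_to_splat @ cam_to_world  # the camera now lives in the splat's world
```

Bake that one matrix into a look file and every exported camera lines up with the untouched splat data, which is what let renders launch directly from the exported shot cameras.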

Compositing happened in Nuke with a light touch. Denoise where the white on white made life hard, grade to lay the image into the set, add transparency in the right places. They started without any native splat support, but once Irrealix’s splat plugin for Nuke was released, the comp team began pulling depth natively from the splats. That killed an earlier idea to temporarily mesh the sequence just to get more depth cues. No need. The data was already there, which meant cleaner edges and better layering without betraying the method.
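
For readers wondering how depth comes natively out of splats without ever making a mesh, the idea is that the same front-to-back alpha compositing that produces the color image can accumulate an alpha-weighted expected depth per pixel. The loop below is a conceptual illustration of that, not the Irrealix plugin's implementation.

```python
# Conceptual sketch: expected depth at one pixel from splat contributions
# sorted near to far, using standard front-to-back alpha compositing.
def expected_depth(depths, alphas):
    transmittance = 1.0
    depth_accum = alpha_accum = 0.0
    for z, a in zip(depths, alphas):
        w = transmittance * a          # weight of this splat's contribution
        depth_accum += w * z
        alpha_accum += w
        transmittance *= 1.0 - a       # light left for the splats behind it
    return depth_accum / max(alpha_accum, 1e-8)
```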

The creative freedom bought by a volumetric performance shows up in small ways all over the sequence. Wide lenses in this movie can stretch faces at the edges of the frame. On a set, you would be stuck. In a splat, you change the virtual focal length and reframe until the actor looks right. Editorial could trim, extend, or return to exact frames to hit specific sound cues. Iteration turned into muscle memory. “As soon as I had a camera exported, I could hit a button and have the render going,” Sears said. It never touched Framestore’s traditional path-traced renderers. It did not need to.
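
The reframing freedom boils down to the virtual camera's intrinsics being numbers you can rebuild per shot. A small, hedged example: pushing to a longer focal length narrows the field of view and eases the wide-angle stretch at the edges of frame. The focal lengths and resolution here are illustrative, not the production's.

```python
# Sketch: rebuild pinhole intrinsics for the virtual camera at a new focal length.
import math

def intrinsics(focal_mm, sensor_width_mm, width_px, height_px):
    fx = focal_mm / sensor_width_mm * width_px                            # focal length in pixels
    fov = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))   # horizontal field of view
    return {"fx": fx, "fy": fx, "cx": width_px / 2, "cy": height_px / 2, "fov_deg": fov}

wide   = intrinsics(21, 36.0, 4096, 2160)   # stretched faces near the edges of frame
longer = intrinsics(35, 36.0, 4096, 2160)   # same take, same performance, kinder framing
```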

Cost and risk were part of the calculation. Gaussian splatting was the better choice for this sequence. Building a dialogue-capable digi-double that holds up in close-up is expensive, and every tweak invites more notes. The mesh route would have pulled the team into weeks of likeness and hair work, with a long tail of iteration that adds up. In Ipswich, you are paying to move talent, run a highly calibrated rig, and train the data. After that, you are standing on solid ground. The client sees the actual human performance early, buys into the fidelity, and the conversation shifts to storytelling. Less arguing about faces, more time to decide how a Kryptonian recording fails.

Some of the smallest details were accidents worth keeping. When the team sliced the splats to create those unstable packets, the ellipsoids broke with ragged edges that danced. It was not a crisp scan line, and the irregularity felt right, almost like the error was physical inside the volume. They kept it. There is also a practical grace note in the way the messages populate the Fortress. Multiple instances play in different positions at once. With a mesh projection system, you would lock yourself into all sorts of constraints. Here, you lay them out, line the cameras up, and go.

This might be the first time dynamic Gaussian splats have anchored a feature film sequence, at least at this scale. Static radiance fields have begun to show up in television and short films. A moving, speaking performance that survives editorial and close-ups is another level. Sears is already looking ahead. Performances, environments, and time, all trained into volumetric memory that anyone can stage after the fact. Someone will hang this in a multi-viewer light field to make a true hologram demo. It is closer than it sounds.

What I love about the Superman sequence is how quietly it lands. The effect does not yell. It looks so clean that you’d be forgiven for thinking it's plain 2D. Superman learns who he is through a recording that feels real because it is, captured from all angles at once, then gently guided through an emerging pipeline that never trips over itself. Technology serves story, not the other way around. 

Framestore, Infinite Realities, and the client-side team took a risk on rapidly moving radiance fields and turned it into something audiences simply accept. That is the bar. Build the future, then make it look obvious.

As Superman himself states, the recordings are his most cherished possession. In a film filled with large action sequences and complicated effects, it was for me too.