

Michael Rubloff
May 20, 2025
At Google I/O 2025, the company unveiled Google Beam, an advanced 3D video communication platform that transforms traditional video calls into immersive, lifelike experiences. Formerly known as Project Starline, Beam utilizes a combination of AI-driven volumetric video, light field displays, and spatial audio to render participants in real-time 3D, all without the need for headsets or special glasses. This innovation aims to bridge the gap between virtual and in-person interactions, offering a more natural and engaging communication medium.
Google Beam is the culmination of years of research and development that began with Project Starline. The system employs a six-camera array to capture depth and motion, which is then processed by custom AI models via Google Cloud. The result is a photorealistic 3D representation of the person on the other end of the call, displayed on a specialized light field screen.
This setup allows for natural eye contact, nuanced facial expressions, and spatial audio cues, making conversations feel more authentic and reducing the fatigue often associated with standard video conferencing. While Google has not officially stated that Starline uses any radiance field pipelines, current job openings on Google's website for this product are looking for people with radiance field familiarity.
Google has partnered with HP to produce the initial hardware for Beam, aiming to make the technology accessible to a broader range of users. Early adopters include companies like Salesforce, Deloitte, and Duolingo, which plan to implement Beam in their offices later this year. The system is designed to integrate with existing video conferencing platforms, such as Google Meet and Zoom, facilitating adoption without overhauling current communication infrastructures.
Beam's real-time 3D rendering capabilities align with the goals of radiance field technologies, such as Neural Radiance Fields (NeRFs) and Gaussian Splatting, which aim to create realistic 3D representations from 2D images. The advancements in AI-driven volumetric video processing demonstrated by Beam could inform future developments in radiance field methodologies, potentially leading to more efficient and accurate 3D modeling techniques.
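For readers less familiar with how radiance field methods turn 2D images into 3D, the core idea reduces to volume rendering: query a learned field for density and color at sample points along each camera ray, then alpha-composite those samples into a pixel color. The NumPy sketch below is purely illustrative, not Beam's pipeline; the `toy_field` function is a hypothetical stand-in for a trained model (here, a hard-coded sphere) so the compositing math can run on its own:

```python
import numpy as np

def toy_field(points):
    """Stand-in for a learned radiance field: returns (density, rgb)
    per 3D point. Here, an opaque sphere of radius 0.5 at the origin."""
    r = np.linalg.norm(points, axis=-1)
    density = np.where(r < 0.5, 10.0, 0.0)   # high density inside the sphere
    rgb = np.stack([np.clip(points[..., 0] + 0.5, 0.0, 1.0),
                    np.full(r.shape, 0.4),
                    np.full(r.shape, 0.6)], axis=-1)
    return density, rgb

def render_ray(origin, direction, near=0.0, far=2.0, n_samples=64):
    """Alpha-composite field samples along one ray (the heart of NeRF rendering)."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction              # (n_samples, 3)
    density, rgb = toy_field(points)
    delta = np.full(n_samples, (far - near) / n_samples)  # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)                # per-sample opacity
    # Transmittance: how much light survives all samples before this one.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)           # composited pixel color

# A ray aimed through the sphere picks up its color; one that misses stays black.
hit = render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
miss = render_ray(np.array([0.0, 2.0, -1.5]), np.array([0.0, 0.0, 1.0]))
print(hit, miss)
```

Real systems replace `toy_field` with a neural network (NeRF) or a set of 3D Gaussians (Gaussian Splatting) optimized so that rendered rays match the captured camera images; doing this fast enough for live video is exactly the kind of problem Beam's volumetric pipeline has to solve.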
I actually received a demo of Project Starline at SIGGRAPH last year, and the system truly does make you feel like you're in the room with the other person. It's exciting to see lifelike 3D get a public release window for later this year, and this was one of the standout announcements from Google I/O. Now, where are the Immersive View updates?
Learn more about Beam here.