Interview

Gaussian Splatting Brings Art Exhibitions Online with Yulei

Michael Rubloff

Feb 22, 2024

Yulei

The advent of radiance fields represents a transformative leap in event photography, aiming to give people the feeling of attending an event asynchronously. Artist Yulei's recent demonstration serves as a compelling example of this technological progression. Through the innovative application of Gaussian Splatting, Yulei has enabled a digital walkthrough experience of an exhibition at Laboratorio 31 Art Gallery in Bergamo, Italy, accessible via the web.

This technological advancement allows individuals to explore a captured moment in time with complete autonomy, mirroring the experience of physically navigating through the space. Unlike traditional event photography, which limits the audience to a photographer’s perspective, radiance fields and Gaussian Splatting provide a multi-dimensional and interactive experience. Viewers can choose their path, focus on details of interest, and engage with the exhibition in a manner that closely replicates the physical experience of attendance.

These advances in digital capture and presentation technologies are democratizing access to events and exhibitions, eliminating barriers posed by geographic location and time. They mark a significant milestone towards achieving the ultimate goal of event photographers: making every event universally accessible, fostering a global sense of presence and participation.

I spoke to Yulei about the exhibition, how it came together, and plans for the future. Please enjoy!

How did you get started with 3D Gaussian Splatting/Radiance Fields?

In 2022, while working on a VR project, I became intrigued by some Twitter posts from Luma AI. The transition from photo capture to video was mesmerizing and truly mind-blowing. During that time, the COVID outbreak led to a lockdown in Shanghai. Levant Art Gallery Shanghai reached out to a project manager, inquiring whether it was feasible to create a VR exhibition online, replacing the offline show. In Italy, I dedicated nearly seven months to 3D modeling, texture building, and web application programming for this project.

During the same period, I delved into thorough research on Radiance Fields, encompassing the study of the initial NeRF paper from UC Berkeley and subsequent advancements over the following two years. This exploration sparked my inspiration to leverage Radiance Fields for showcasing 3D scenes online, all while preserving the essence of the physical exhibition.

In 2023, a paper and accompanying source code on 3D Gaussian Splatting were published. This work showcased both high-performance rendering and excellent image quality. At that point, I believed it was the opportune moment to integrate it into real-world projects.

What inspires your creations, and how do you decide on the themes or subjects for your projects?

As a full stack developer based in Italy, my background encompasses both art and design, as well as programming. Having started in traditional fine art and moved into graphic design, I began teaching myself programming in the 2010s. My projects are heavily driven by my interests, involving continuous exploration of the latest technologies while striving to keep the results as simple as possible.

How did the original idea of capturing the gallery with a radiance field come up?

Prior to capturing the gallery scene, I had already conducted numerous radiance field captures and training sessions, often collaborating with Luma AI and nerfstudio. At the time, Gaussian Splatting was not yet available in either Luma AI or nerfstudio. The subjects of my captures included bicycles, garages, courtyards, and more. Drawing from my experience with photogrammetry, I decided to test how to achieve the best results using radiance fields.

In the latter half of 2023, I presented some results of radiance field technology to Laboratorio 31 Art Gallery, which is based in Bergamo. I inquired about the possibility of capturing an exhibition and publishing it online. Fortunately, I was granted the opportunity to do so.

How did it all come together? 

I have been utilizing React Three Fiber extensively across many of my projects. By combining React.js and Three.js, React Three Fiber offers a straightforward approach for building 3D web applications. Its ease of use has been instrumental in streamlining my development process.

At the outset of this project, React Three Fiber did not support Gaussian Splatting rendering, so I opted to use antimatter15's splat for WebGL rendering. Given React Three Fiber's flexibility in writing new components, integrating this solution proved feasible. A few months later, Drei, a helper library for React Three Fiber, added official splat support, and I transitioned to using it.

For the interactive part of this project, like navigation and player control, I struggled for a while. In my last 3D project, I used the Cannon physics library. For this gallery project, however, I wanted to keep things simple and extendable, so despite the initial struggle, I decided to switch to a 2D physics library, matter.js. It was my first time using it. With the assistance of GitHub Copilot Chat, which provided guidance on the simple structure needed for this project, I quickly became familiar with it.
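The key idea is that the walkable floor plan lives in 2D while the 3D splat is purely visual. As a rough conceptual sketch (not Yulei's actual code — matter.js wraps this same logic in its Engine, Bodies, and collision solver), a top-down player can be advanced and kept inside the gallery's wall bounds, then mapped to the 3D camera:

```javascript
// Minimal top-down movement sketch: the player is a point in 2D,
// and the gallery floor plan is an axis-aligned rectangle of walls.
// A real implementation would delegate collisions to matter.js bodies.
function stepPlayer(player, input, bounds, dt) {
  const speed = 2.0; // metres per second (illustrative value)
  const next = {
    x: player.x + input.dx * speed * dt,
    y: player.y + input.dy * speed * dt,
  };
  // Clamp against the walls instead of passing through them.
  next.x = Math.min(Math.max(next.x, bounds.minX), bounds.maxX);
  next.y = Math.min(Math.max(next.y, bounds.minY), bounds.maxY);
  return next;
}

// The 2D position then drives the 3D camera: the plan's x/y map to
// the camera's x/z at a fixed eye height.
function cameraFrom2D(pos, eyeHeight) {
  return { x: pos.x, y: eyeHeight, z: pos.y };
}
```

Keeping physics in 2D like this is what makes the setup simple and extendable: adding a new wall or doorway only changes the floor plan, never the splat.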

For the remaining aspects, such as the popup of pages and displaying works information, I utilized daisyUI, a component library for Tailwind CSS.

From what I understand, you didn’t have a ton of time to capture the gallery. How did you work through that challenge? 

With my experience in photogrammetry, I understand the crucial role of timing, especially when it comes to Gaussian Splatting training. During my study with Gaussian Splatting, I learned that the camera's position is highly significant. Any changes in the scene, such as people moving, shifts in sunlight, or even my own reflection on glass surfaces, can introduce noise or errors. This can manifest in the results as floating clouds or distorted structures.

As mentioned earlier, I have conducted numerous scene captures using various devices, including smartphones, DSLRs, and mirrorless digital cameras. Devices like smartphones tend to be "too smart," automatically enhancing each capture for the best-looking result. However, this optimization, which sharpens edges, adjusts exposure, and optimizes colors, can introduce noise or errors, which is not ideal for Gaussian Splatting training.

To address these challenges, I opted to use a mirrorless camera, specifically the Sony α6100, for the capture process. For this particular project, I captured a total of 525 images. After carefully measuring the exposure, white balance, and shutter speed, I standardized these settings across all images. This ensured consistency and minimized variation between images, resulting in a more uniform dataset for Gaussian Splatting training.

Another benefit of fixing the camera settings was the expedited capture process. With the camera mounted on a tripod and linked to a remote control, the only task required was positioning the camera as per the predetermined plan and pressing the button. This streamlined approach significantly accelerated the entire capture process.

Ok, so you got the footage and it came out super well. Now you had another challenge of hosting it online and making it interactive. How did you do it?

Since this project primarily presents 3D spaces, works, and exhibition information, I categorized it as a static website or static web application. For hosting static web applications, the optimal approach is utilizing cloud storage solutions such as AWS S3, which I have previously used. Additionally, to enhance global speed and performance, Cloudflare can be employed effectively in conjunction with AWS S3.

How does Gaussian Splatting lend itself well to showing events or moments in time? How does that freedom for the end user to explore a scene affect the way you approach a capture?

Gaussian Splatting is particularly suited for capturing events or moments in time due to its ability to represent 3D scenes with high fidelity and detail. This technique allows for the creation of immersive visualizations that can effectively convey the atmosphere and dynamics of a specific event or moment.

The traditional method of 3D capture, known as photogrammetry, typically produces results in the form of a mesh representation. While photogrammetry and radiance fields share a similar workflow, the results from radiance fields are particularly well-suited for rendering scenes with intricate lighting effects and global illumination.

In the context of this project, one challenge during the capture process was the limited space within the gallery. Additionally, the large, featureless white walls that fill the space pose difficulties for capture and camera alignment. These factors complicate obtaining comprehensive and accurate data for 3D scene reconstruction.

Fulfilling the end user's ability to explore the entire scene is best achieved by capturing as many photos as possible. However, this approach presents several challenges. Firstly, capturing a large number of images requires more time, which may not always be feasible. Additionally, environmental changes occur over time, meaning the conditions during the last capture may differ significantly from those during the first capture. These changes can include shifts in sunlight, changes in the surroundings such as cars parking and leaving on the street, and other dynamic factors.

Furthermore, an excessive number of images can pose issues for Gaussian Splatting training. Too many images may introduce noise and complexity to the training process, potentially affecting the quality of the final result. Therefore, striking a balance between capturing a sufficient number of images to enable comprehensive exploration and avoiding an excessive amount that may hinder training is essential. This involves careful planning, timing, and consideration of the trade-offs involved in the capture process.

To maintain a reasonable number of captures while ensuring comprehensive coverage, a targeted approach was adopted. The focus was on capturing photos only around the height of eye level, dividing the scene into three rows.

The first row consisted of photos taken straight forward at the height of eye level. The second row involved tilting the camera upwards towards the ceiling, while the third row entailed tilting downwards towards the floor. Each row was captured with a slightly varied camera height, aiding in COLMAP camera alignment.

By organizing the capture process in this manner, sufficient coverage of the scene was achieved while minimizing the number of photos required. This approach facilitated more efficient data processing and reduced the potential for noise in the final result.
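As a hypothetical sketch of that plan (function names, tilt angles, and offsets are illustrative, not Yulei's actual values), the three-row pattern can be expressed as a simple generator of camera poses: at each stop along the planned path, one shot level at eye height, one tilted up toward the ceiling, one tilted down toward the floor, with a slight height stagger per row to help COLMAP disambiguate:

```javascript
// Enumerate capture poses for a three-row, eye-level capture plan.
// Each stop along the planned path yields three shots: level,
// tilted up toward the ceiling, and tilted down toward the floor.
// Heights are staggered slightly per row to aid COLMAP alignment.
function capturePlan(stops, eyeHeight = 1.6) {
  const rows = [
    { name: "level", pitchDeg: 0, heightOffset: 0.0 },
    { name: "up", pitchDeg: 30, heightOffset: 0.05 },
    { name: "down", pitchDeg: -30, heightOffset: -0.05 },
  ];
  const poses = [];
  for (const stop of stops) {
    for (const row of rows) {
      poses.push({
        x: stop.x,
        y: stop.y,
        height: eyeHeight + row.heightOffset,
        pitchDeg: row.pitchDeg,
        row: row.name,
      });
    }
  }
  return poses;
}
```

For instance, 175 stops under this scheme would yield the 525 images mentioned above, though the actual path through the gallery was planned by hand.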

What’s the reception been to people interacting with your site?

Many individuals showed enthusiasm for the project. The link and a video screen recording of the web application were shared on platforms like Twitter, the nerfstudio Discord, and Reddit. Within just two days, the website received over a thousand visits. Notably, the creator of React Three Fiber, @0xca0a, quoted my Twitter post, leading to a significant increase in visitors. This drew attention from people in related fields such as radiance fields, Three.js, and VR, who engaged more actively with the project, and many expressed interest in the technical details. I promised the community a blog post with more detailed information about the project, and I'm committed to writing it.

Engagement from the art and gallery sectors has been lower, which could be attributed to the project's limited exposure in those circles. Better marketing efforts in the future may help increase visibility and engagement among these audiences.

Is there a project that you’ve always wanted to try that radiance fields would enable you to pursue?

This marks the completion of my first real-world project utilizing radiance fields, specifically Gaussian Splatting. I am particularly drawn to focusing on specific areas and exploring more use cases. Typically, I am driven by projects. Real-world cases help me understand actual needs, and feedback obtained enables me to make better decisions for future endeavors.

In the future, there are several endeavors I may pursue with radiance fields. Firstly, I aim to develop improved tools to assist individuals in generating 3D websites more effectively. Additionally, I plan to further develop the gallery project, aiming to increase the fluency of this new 3D technique and expose more people to it.

Yulei's capture of the gallery can be found here and on Twitter.

