Interview

Interview with Lolo2K

Michael Rubloff

Feb 18, 2024


In an era where technology and art increasingly intertwine, Lolo2K blends documentary filmmaking with the innovative potential of Neural Radiance Fields (NeRFs) and Gaussian Splatting, creating experiences that connect viewers more closely to the moments he captures.

Rooted in documentary filmmaking and passionate about capturing a moment's essence, Lolo2K ventures beyond traditional boundaries, embracing the future of artistic expression through cutting-edge technology. This interview delves into the heart of Lolo2K's journey, exploring the fascinating realm of Neural Radiance Fields (NeRFs) and Gaussian Splatting (GS) models and their transformative impact on art and storytelling.

By meticulously crafting radiance fields, Lolo2K opens a world where art is not merely observed but experienced, allowing viewers to explore creations in detail and at their own pace.

From self-taught beginnings in the pandemic's solitude to boundary-pushing collaborative projects, Lolo2K's journey showcases how technology amplifies artistic expression.

Whether capturing the ephemeral beauty of a secret garden in Amsterdam or weaving together unrelated spaces into a cohesive, immersive experience, Lolo2K's projects offer a glimpse into the future of how we perceive, interact with, and ultimately understand art.

As we delve into Lolo2K's insights, we invite you to discover how radiance fields enhance the artistic landscape and pave the way for innovative storytelling and expression.

For those unfamiliar, could you explain what radiance fields are and how they contribute to the uniqueness of your art?

As a documentary filmmaker I try to be as objective as possible. In the past I used video and 360-degree video to record stories and stay as unframed as possible. 360-degree VR helped a great deal: I wasn't directing or framing the viewer's gaze, deciding only where to place the camera and when to start recording. NeRFs and GS are an upgrade to this principle of unbiased recording because they remove the spatial and temporal constraints of a visual recording. Whereas traditional video forces the duration and position of a recording onto the viewer, a 3D recording lets the viewer decide where they want to stay and linger, without a set duration dictated by the runtime of the experience.

Where the viewer was once forced to stay in a place for however long I had placed the camera there, in a 3D scan the viewer can move around freely, staying at a desired location for as long as they want.

How did you get started with NeRFs/Radiance Fields?

I started learning photogrammetry via YouTube tutorials during the COVID-19 pandemic. Soon after the pandemic, the A.I. revolution started and neural radiance field papers started popping up. Again via YouTube, I learned from tutorials provided by early adopters. And since recording a photogrammetry dataset and recording a NeRF dataset share many similarities, I was able to get started making NeRFs quite quickly. Still, it took me a while to pick up the command-line knowledge needed to install the tools and calculate camera positions. I put off those lessons because I had no CLI experience and felt overwhelmed in the beginning. As usual, it turned out I procrastinated far longer than it actually took to learn what I needed to know.
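For context on the camera-position step Lolo2K mentions: before a NeRF or splat can be trained, structure-from-motion software (most commonly COLMAP, which nerfstudio wraps in its ns-process-data command) estimates where each photo was taken. The sketch below illustrates that step with COLMAP's Python bindings, pycolmap; it is a minimal illustration of the general workflow, not Lolo2K's exact tooling.

```python
# Minimal sketch of the camera-pose step, assuming the pycolmap bindings
# (pip install pycolmap). nerfstudio's `ns-process-data` runs this same
# structure-from-motion pipeline under the hood.
from pathlib import Path

import pycolmap

image_dir = Path("images")        # photos or frames extracted from video
output_dir = Path("colmap_out")   # where the reconstruction is written
output_dir.mkdir(exist_ok=True)
database_path = output_dir / "database.db"

# 1. Detect keypoints and descriptors in every image.
pycolmap.extract_features(database_path, image_dir)

# 2. Match features between image pairs.
pycolmap.match_exhaustive(database_path)

# 3. Incrementally solve for camera poses and a sparse point cloud.
reconstructions = pycolmap.incremental_mapping(database_path, image_dir, output_dir)

# The recovered pose of each registered image is exactly what a NeRF or
# Gaussian Splatting trainer consumes as input. (Attribute names vary
# slightly across pycolmap versions.)
for image in reconstructions[0].images.values():
    print(image.name, image.cam_from_world.translation)
```

In nerfstudio this whole step collapses into a single command, which is likely what the tutorials he mentions cover and why the CLI hurdle he describes is mostly a one-time cost.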

What inspires your creations, and how do you decide on the themes or subjects for your projects?

My father used to make movies about iconic figures, stories and places that later became culturally significant. So I like to find people and places that have a similar impact now, but more importantly, at a later point in time. As an example, the secret garden in Amsterdam will be torn down at the end of this year, so recording this spectacular collection of street art is like recording a landmark. It might be significant to only a few people, but since the time and effort needed to create a recording have been reduced, the barrier to capturing local artists has been too.

What was the inspiration behind your experience project, Contrast Features?

NeRFs are better at recording fine detail and lighting than photogrammetry, so when I started creating radiance fields I saw an opportunity to render artworks that had previously rendered unsatisfactorily. Photogrammetry ends up as hard polygons that stiffen areas meant to be soft, and it can't capture all the subtleties needed to truly represent reality. To represent an artwork in VR, its digital twin should be as close to the real thing as possible. But while NeRFs are more photorealistic, they also become very abstract and colorful when the camera goes out of bounds. So I was able to tie several unrelated places and artworks together using the extrapolations of the neighboring models. I made several recordings of works by people I know personally, so asking for their consent to create a creative VR experience was quite easy. I should also mention the developers at nerfstudio, who were kind enough to add support for equirectangular video rendering.

In your most recent work, Wayne "Man of the Forest" Horse, you showcase the flexibility of directing the camera through space. How does having total autonomy over camera movements after you've finished shooting help you as a creative?

It helps to create a smooth preview while I'm figuring out how, where and when I can present the same digital space using a VR headset. My end goal is to transport people to the recorded location so they can direct the experience themselves and not be forced to move the way I decided in a video. That's just my perspective, but for others these video fly-throughs create an opportunity to present their work in a supernatural way: passing through walls like a ghost, creeping through their world, and gliding from frame to frame to show off its interesting elements.

The locations you've captured are all so vibrant and have so many small details to stop and explore. How do you come across these spaces, and do you ever see something that immediately jumps out at you?

I'm lucky to have been born and raised in the greater Amsterdam area, so I've been surrounded by interesting people and places for a long time. Sometimes I ask whether I can make a recording of a certain object or space; other times I'm asked to help record for promotional or other reasons, like the need to document a final work for the grant that financially supported it. People and their work stand out to me when the experience of standing and watching gives off a strong vibe, when the work reminds me of another time and space, or when it's so unique that I feel I want to be connected to the work and its creator for a longer than average amount of time.

In the same way, I enjoy nature and its impressive grip on my attention span. Sometimes you just never want to leave a place; in your mind you will never forget that moment. But with 3D scans you can share this enthusiasm through a powerful visualization tool, dragging your audience into much of the same atmosphere and visuals.

How have radiance fields enabled you to tell stories in new ways?

I always hated not having better stories to tell. I graduated as a documentary filmmaker, but I didn't feel compelled to do another serious project for over a decade. My graduation video was about a painter and his reasons for painting and creating sculptures. Now that I'm able to create 3D environments and digital twins, I can start with an objective observation and work my way up to telling the right story about the artist, but only if it's there and wants to be told. Otherwise the visual story is sufficient to showcase the art, and part of the artist. Moreover, the 3D scan can become a hub of information.

VR is a very interactive place where a plethora of media can be attached to objects and locations. There is an infinite amount of room if you need it, or a dense collection of mixed media: you can toss some sketches around on a table, have music play in certain areas, or create a VR world within the VR world. This nonlinear, adventurous multimedia mix can grow organically and layer as deep as we feel is needed.

How do you balance technical challenges with artistic expression when working with radiance fields?

I try to make a solid and sharp model, but that can be a challenge. I usually need to record a large space without a central object to focus on, so I tend to grab my 360-degree camera even though I know it isn't my sharpest lens. I'm still learning a lot and making many errors, but since this technique is relatively new and I work with people who experiment a lot as well, I'm usually excused when there are blemishes in the images I create. I learn by doing and have been able to achieve many things while still feeling like a novice. Leaning into every opportunity I come across and asking for help on Discord has helped me grow faster than I thought possible.

What are some developments in the field of radiance fields or related technologies that excite you?

The open, sharing, selfless attitude and energy of the people working in this space truly amazes me. This tech is worth so much to me as a creator because it opens up high-quality content creation for smaller outfits and production companies. It's amazing how quickly we went from NeRFs that took 36 hours to render to 3D Gaussian splats that take only 36 minutes. And from open-source platforms like nerfstudio for training and rendering, to Gracia, which streams models to VR headsets and makes them so much more tangible. When I give a demo, I can't go without a nice clip made in nerfstudio and handing over my Quest 3 running a splat model in VR.

Similarly, how do you see the future of imaging technology developing?

I think we can start showing a lot of the cool places we enjoy in a more compelling way. VR has always had a wow factor, but it needs more reasons for adoption to become a household device. A large library of cheap yet well-rendered 3D environments that teleport you to amazing, far-off places and offer a nonlinear mixed-media experience will hopefully give people enough reason to purchase a headset. It will help me create many impressions that showcase other people's works and views on the world.

You can find out more about Lolo2K and follow his work via Instagram.
