Interview with Paul Debevec: Pioneering Light Field Research and Navigating Technological Shifts

Katrin Schmid

Jan 3, 2024


We usually focus on providing the most up-to-date news on radiance fields. Today we are beyond thrilled to offer a dive into light fields by speaking with pioneering light field researcher and featured speaker at SIGGRAPH Asia 2023, Paul Debevec.


A big thank you to him and to the SIGGRAPH Asia press team for making this insightful conversation possible.

Q: Mr. Debevec, your early work on light fields, especially in view-dependent texture mapping from your 1996 paper, was a significant step for the research field. 

Debevec: By SIGGRAPH 1995 in Los Angeles, I was already working on my PhD on photogrammetry. Part of my technique was that I had lots of photos of a building from different angles, and I was going to cross-dissolve between the projections of those photos based on where your viewpoint was. I ended up calling this "view-dependent texture mapping", and in retrospect the approach was a specialized instance of a sparse light field. I eventually presented my paper in the "image-based modeling and rendering" session at the SIGGRAPH conference.
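
To make the idea concrete, here is a toy sketch of the cross-dissolve that view-dependent texture mapping performs: each source photo's contribution grows as the novel viewpoint approaches the viewpoint it was captured from. The weighting scheme below is an illustrative assumption, not the exact formulation from the 1996 paper.

```python
import numpy as np

def view_blend_weights(source_dirs, query_dir):
    """Blend weights for cross-dissolving source photos by viewpoint.

    source_dirs: (N, 3) unit vectors from a surface point toward each source camera.
    query_dir:   (3,)   unit vector from the same point toward the novel viewpoint.
    """
    cosines = np.clip(source_dirs @ query_dir, -1.0, 1.0)
    angles = np.arccos(cosines)          # angular distance to each source view
    weights = 1.0 / (angles + 1e-6)      # angularly closer views dominate
    return weights / weights.sum()       # normalize so the blend sums to one

# Example: two source cameras 90 degrees apart, novel view halfway between them.
dirs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
query = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
print(view_blend_weights(dirs, query))   # ~[0.5, 0.5]
```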

Before that, I had talked to Marc [Levoy, from Stanford University] at a party, and he said, hmm, that sounds like we might be submitting similar papers to SIGGRAPH. Marc was working on a paper titled "Light Field Rendering" with Pat Hanrahan, and it turned out to become the most cited paper on light fields.

Interestingly, Microsoft Research presented "The Lumigraph" in the same session, a concept comparable to the light field but projecting onto geometry, making it somewhat more advanced. The naming nuances, as I jokingly mentioned, added a touch of carnival whimsy to the discourse.

Q: The rapid evolution of the field is undoubtedly a challenge. How have you managed to keep up with the fast-paced advancements?

Debevec: SIGGRAPH Asia had 900 papers submitted; they were expecting half that. That's how accelerated the field is. Navigating the ever-evolving landscape of computer graphics is indeed challenging. Participating in events like SIGGRAPH Asia is my way of immersing myself in the latest developments, engaging with fellow enthusiasts, and gaining insights that help me stay informed and inspired.

Q: Shifting gears to virtual production, could you elaborate on how your early work in photogrammetry, high dynamic range imaging (HDRI) lighting, and light stages found practical applications in the filmmaking process?

Debevec: Our approach went beyond paper publications; we sought to create practical applications. Utilizing photogrammetry, HDRI lighting, and light stages in our movies brought a tangible and cinematic dimension to our research. It was about translating theoretical advancements into real-world filmmaking techniques.

So a win for virtual production can be as simple as the Star Trek TV series showing star fields whizzing by outside the windows. It's super fun, it looks good, actors love it, and you can even see it in reflections and all that, because it is really there. But that is a very simple scene to render: it is just points of light streaking by.

Q: While there is significant interest, radiance field technology has not yet been used in feature films or virtual production.

Debevec: I am not sure if it has been featured in a show yet, but there is already a company with software that can render a NeRF on your LED wall. It seems that not everyone in the industry is comfortable with the risks of new technology. Most NeRFs, as I have observed, are captured by researchers and showcased on platforms like YouTube or in papers, often at a small scale. Virtual production, in my view, doesn't make sense below 24 frames per second at 4K resolution, considering the industry standard set by streaming services.

While NeRFs theoretically handle any scene complexity, they rarely hit the 4K resolution mark. As for demand for high-resolution NeRFs, I don't see it within the research community, because once they reach 1K, they are content. I have come across numerous papers where 256 × 256 pixels is deemed sufficient for facial details.

Personally, I'd also like things to settle down a bit. At this point it is very hard to predict: everything's all over the place. It was all NeRFs, NeRFs, NeRFs, and then for the last couple of months it has been all Gaussian splats. I do not think either is the final word. The film industry might be cautious about embracing such innovations due to the complexity of workflows and the necessity for reliability: you need to be 99% sure it is going to work. Will we use NeRFs and Gaussians? I think yes, probably, but I am not sure how fast that's going to happen.

Q: You have also worked on several 3D teleconferencing systems over your career. Do you see a future where we have those at home?

Debevec: I believe achieving engaging video calls where the other person appears three-dimensional on the screen is entirely feasible. My 2009 experiment, despite its impractical and somewhat risky rotating display, offered a captivating exploration into the realm of 3D face interactions.

Photos: The rotating mirror used in the 2009 teleconferencing system and the resulting images

In comparison, technologies like Google's Project Starline, while advanced, present their own set of limitations: they are primarily tailored for one-to-one interactions and come at a higher cost. Our setup from 2009 allowed for multi-person interactions and delivered a unique and distinctive experience.

This year NVIDIA, including one of my former graduate students, Koki Nagano, embarked on a project featured in SIGGRAPH's Emerging Technologies (E-Tech) program. It aims to be a low-cost alternative to Starline. However, the challenge lies in developing custom hardware, as convincing people to invest in additional hardware can be a significant hurdle. In their approach, they generate a 3D image of the person from a single webcam, employing neural networks to infer depth information that is then used for the 3D model.
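
For readers curious what single-webcam depth inference looks like in practice, here is a minimal sketch using the publicly available MiDaS model as a stand-in; this is not NVIDIA's system, just one widely used way to get a dense relative depth map from a single RGB frame.

```python
import cv2
import torch

# Load a small, publicly available monocular depth model (MiDaS) via torch.hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

# Grab one frame from the default webcam.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
assert ok, "could not read from webcam"
img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

# Run the network and upsample the prediction back to the frame's size.
with torch.no_grad():
    pred = midas(transform(img))
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

# 'depth' holds relative inverse depth per pixel, usable to reproject the
# person into 3D for a view-dependent display.
print(depth.shape, float(depth.min()), float(depth.max()))
```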

Q: How do you see these alternative scene representations shaping the future of computer graphics?

Debevec: There is the tyranny of the texture-mapped polygon that we've had since the first rasterization engines back in the '80s; Silicon Graphics machines did this. Now every video card can render a billion triangles a second. But if you just look around the world, it is not made out of triangles. Although, ironically, there is a bunch of triangles on that wall here.

But the world is a bunch of atoms that interact with photons, right? Molecules that interact with photons. And there are too many of them to actually represent every molecule and every photon.

So computer graphics has always had to make approximations. The question is, what approximation are you going to make? Texture-mapped triangle meshes are one way to go, and there have been years of investment to try to make polygons efficient.

With global light transport you can get the results that we saw in Blade Runner 2049, Furious 7, or Avatar, and these are very appealing and people like them. So that's all there.

But at the same time, we have a lot of investment in fossil fuels: getting them out of the ground, putting them into our cars, and driving around. Because there is so much investment in that, it has become harder to change over to electric vehicles; a whole new infrastructure is required, and it upsets the existing marketplaces and even some of the politics. We probably need this stuff to be de-risked.

There are a lot of things that will probably have opportunities to transform. But one of the biggest questions is, like, how do we represent a scene? How do we represent a three-dimensional object or environment?

Q: Finally, any advice for students aspiring to make a meaningful impact in computer graphics?

Debevec: To the aspiring minds in computer graphics: attend conferences like SIGGRAPH to find your community. My first SIGGRAPH was in 1994, and I have not missed a year since.

Explore diverse facets of the field, and identify areas that truly resonate with your passion. Look for new movements, think innovatively, and contribute ideas that have the potential to shape the trajectory of technology. It is the synergy of fresh perspectives that propels our field forward. 

I am hoping we attract people to the field who can look at all this and say: yeah, it is pretty good, but I can do better, I can think of something new. And then we get to see their paper too.

(Note: The interview format is a condensed representation for clarity.)
