News

Professor Kanazawa Lectures about NeRFs at Stanford HAI

Michael Rubloff

Nov 7, 2023

Professor Kanazawa

On October 24th, Stanford University featured a series of speakers for its event, New Horizons in Generative AI: Science, Creativity, and Society. Among that featured group of speakers was Professor Angjoo Kanazawa.

Professor Kanazawa leads the Kanazawa AI Research (KAIR) Lab at Berkeley and has taught several prominent NeRF engineers, including Alex Yu (CTO of Luma), Ethan Weber (co-founder of nerfstudio), Justin Kerr (LERF, Nerfacto-Huge), and Brent Yi (nerfstudio). She was also the professor of Matt Tancik, one of the original authors of NeRF.

She's the Yoda of NeRF and serves as an advisor to Luma AI.

Professor Kanazawa opened her talk by exploring the future of memories. She explained how emergent technologies, particularly radiance fields, could enable us to revisit and retell our stories. Her invocation of iconic cultural touchstones like Ready Player One, the Holodeck, and Harry Potter was not merely nostalgic; it underscored the trajectory from fiction to tangible reality. These stories have long ignited the imagination of both the public and researchers, picturing worlds where the boundary between the real and the unreal is seamless and interactive. This vision is part of what has propelled the explosive research and development in the field of NeRFs, leading to a prolific output of academic papers.

However, this rapid proliferation has also brought to light a challenge: the sheer number of NeRF papers has led to uncentralized repositories and siloed ideas. This situation spurred the conception of nerfstudio in March 2022. Just a few months later, nerfstudio was released with an ambitious goal at its core: to make it simple to Use, Develop, and Learn about NeRFs. In many ways, nerfstudio is poised to become the Hugging Face of radiance fields, centralizing NeRF research and development in a way that's accessible and community-driven.

This timeline of development underscores the speed at which NeRFs and radiance fields are advancing. While there are literally hundreds of papers interspersed throughout this timeline, the progression is undeniable. It's been less than four years since the original NeRF paper was released, and the strides made since then are remarkable.

The concept of a future where visual media transcend two dimensions and enter the realm of 3D is as thrilling as it is complex, marking a significant departure from traditional photography. For a sizable portion of our population, photography felt like something that would never evolve. But it has. We can no longer ignore the progress the technology has made, and we need to begin embracing photorealistic 3D environments that are native to our everyday lives.

As part of the talk, Professor Kanazawa also teased the Gaussian Splatting pipeline that is coming to nerfstudio shortly. Remember, nerfstudio unveiled its 3DGS method, gsplat, a few weeks back. Hopefully it won't be much longer until everyone has access to it.

In addition, she touched on the viewer, which is currently being revamped. As a bit of a spoiler, the new viewer will be coming to Mobile(!), have Shareable Links (!!), and have a Viser integration.

Being able to share your trained NeRFs and splats will be an amazing feature and is the one I am most looking forward to. She went on to give demos of other NeRF papers built on top of nerfstudio, including InstructNeRF2NeRF, Language Embedded Radiance Fields, LERF-TOGO, and NeRSemble.

One question that caught my eye was when someone asked Professor Kanazawa which use case she's most interested in, and she replied: generative 3D. Of course, since this talk was given, Luma AI has released its generative 3D platform, Genie. She also touched on the excitement around generative 3D scenes, some of which have made their way onto social media over the last week. However, I have it on good authority from top NeRF engineers across multiple NeRF companies that platforms purporting to have solved this problem have not. This remains an active field of research, and like Professor Kanazawa, I am excited to see when it arrives.

Professor Kanazawa also touched on 4D dynamic scenes, which is another area of great interest.

Another interesting question posed to Professor Kanazawa was when we will be seeing 4D movies and TV. Her answer was quite reassuring: technically it comes down to a rendering question, but when it begins to be pursued is more of a business or startup question. That has very interesting implications for the entertainment industry, so pay attention to what she's saying. It does come with the big caveat that the scene needs to remain static, but you have to start somewhere!

The talk by Professor Kanazawa at Stanford University's event not only highlighted the current achievements in NeRFs, but also painted a vision of a future rich with dynamic 3D and 4D content. As we stand on the brink of these technological marvels, the line between reality and digital creation continues to blur, promising new horizons for creativity, science, and society at large. The excitement in the field is palpable, and with each research paper and development, we inch closer to realizing these once-fictional dreams. With such rapid advancements, one can only speculate how soon these innovations will find their way into our daily lives, reshaping entertainment, education, and beyond.

Watch the full talk below!

