Live 3D Portraits from a Single Image Shown by NVIDIA

Michael Rubloff

May 3, 2023

NVIDIA Realtime Radiance Fields

When NVIDIA first announced Maxine and its ability to redirect eye contact on a video call, my own eyes nearly popped out of my skull.

Today I had a similar reaction to Live 3D Portraits, released by NVIDIA, UC San Diego, and Stanford University, which renders a photorealistic 3D representation from a single photo. What!? It goes without saying that this opens doors for vast improvements in video conferencing, AR, and VR. I want Apple's Animoji, but with photorealistic representations through the Live 3D Portrait.

The method also runs at 24fps, fast enough that it should not look jittery or out of place, and it avoids the expensive GAN inversion process.
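To get an intuition for why skipping GAN inversion matters for frame rate, here is a toy sketch in pure Python. Both function bodies are placeholders of my own invention, not NVIDIA's code: classic GAN inversion optimizes a latent code over hundreds of iterations per image, while a feed-forward encoder spends a single pass per frame, which is what makes a 24fps budget (about 41.7 ms per frame) reachable.

```python
# Toy stand-ins, purely illustrative: neither function is the actual model.

def gan_inversion(image, steps=300):
    """Classic GAN inversion: iteratively optimize a latent code to match
    the input image. Each step would be a full forward + backward pass."""
    latent = 0.0
    for _ in range(steps):
        latent += 0.001  # placeholder for a gradient update
    return latent

def feed_forward_encoder(image):
    """Live 3D Portrait-style approach: one forward pass, no per-image
    optimization loop."""
    return image * 0.5  # placeholder for a single network pass

# At 24 fps, the entire pipeline must fit in this per-frame time budget:
budget_ms = 1000 / 24
print(f"per-frame budget at 24 fps: {budget_ms:.1f} ms")
```

The point of the contrast: an inversion loop multiplies per-frame cost by its step count, so even a fast network becomes interactive only once that loop is removed.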

Look at this video; look at it! How is this possible? You may have guessed what helps power it. Neural radiance fields.

I find myself occasionally thinking that the outputs are good but could still be improved, and then I remember this is being generated from a single image. It's not a great surprise that the authors mention the method can struggle when the input image is a strong profile. To be honest, I'm more impressed it's able to recognize anything coherent from a profile view.

Other experiments show the importance of the Vision Transformer (ViT) layers, which help it learn minute, view-dependent details, such as the spider tattoos below. Additionally, focal length, principal point, and camera roll are all critical to achieving realistic outputs.
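Since focal length, principal point, and camera roll are flagged as critical inputs, it's worth recalling where those quantities live: in a standard pinhole camera model. The sketch below (values are illustrative, not from the paper) shows how the intrinsics matrix maps a 3D point to pixel coordinates, and how even a small roll of the camera shifts where a face lands in the image.

```python
import numpy as np

# Pinhole intrinsics: focal length and principal point, in pixels.
fx = fy = 1200.0          # focal length
cx, cy = 256.0, 256.0     # principal point (center of a 512x512 crop)

K = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])

# Project a 3D point in camera coordinates (z pointing forward, meters).
point = np.array([0.1, -0.05, 2.0])
uv = (K @ point) / point[2]        # perspective divide by depth

# Camera roll is a rotation about the optical (z) axis; 5 degrees here.
roll = np.deg2rad(5.0)
R_roll = np.array([
    [np.cos(roll), -np.sin(roll), 0.0],
    [np.sin(roll),  np.cos(roll), 0.0],
    [0.0,           0.0,          1.0],
])
uv_rolled = (K @ (R_roll @ point)) / point[2]
```

Get any of these three quantities wrong and every projected point lands in the wrong place, which is presumably why the method needs them to be accurate.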

The video demonstrates that the Live 3D Portrait also works with glasses and headphones. While it's not shown, I am curious to see how hats affect it. It appears that Live 3D Portraits are ready to go, without needing to do any calibration or tuning for a person using it for the first time.

But it doesn't stop there.

The Live 3D Portraits add another dimension to these stylized images and the characters that have been created through them. I am excited to see how this helps lower the barrier to entry for animations for short films and monologues. I cannot wait to see what people create from this!

But it doesn't stop there either. Have you ever seen a cat image, where you felt like you wanted to make it three dimensional?

Personally these cat examples just remind me of the last time I had a laser pointer.

"We showcase our results on human and cat face categories in this paper, but the methodology can apply to any category for which 3D-aware image generators are available."

Real-Time Radiance Field

It's almost a footnote at the end of the paper, but the authors plan to extend this technology to reconstructing real-world objects and enabling interactive 3D visualization from a picture. This truly is amazing and could be pushing us very quickly toward a future not dissimilar from Harry Potter.

Moreover, I am excited to see other NeRF papers get incorporated into this technology such as Instruct-NeRF2NeRF, so that we can all have real time mustaches during our Zoom calls.

Featured

Recents

Research

How EVER (Exact Volumetric Ellipsoid Rendering) Does This Work?

Another Ray Tracing Radiance Field emerges, this time from Google.

Michael Rubloff

Oct 3, 2024

Platforms

DigitalCarbon Joins Y Combinator for Radiance Field Solutions

New company, DigitalCarbon has been accepted into Y Combinator to pursue Radiance Field reconstructions.

Michael Rubloff

Oct 2, 2024

Platforms

Chaos V-Ray 7 to support Gaussian Splatting

3DGS is now part of V-Ray 7's beta, paving the way for use in platforms like 3ds Max and Maya.

Michael Rubloff

Oct 1, 2024

Platforms

Kiri Engine Gaussian Splatting Blender Add-On

Industry-standard platform Blender is getting another big 3DGS boost, this time from Kiri Engine.

Michael Rubloff

Sep 30, 2024
