Research

HOSNeRF Revolutionizes 360° Free-Viewpoint Rendering of Dynamic Human-Object-Scene from a Single Video

Michael Rubloff

Apr 25, 2023

HOSNeRF

Researchers from the National University of Singapore and Tencent have developed a new method called HOSNeRF (Human-Object-Scene Neural Radiance Fields) that can create 360° free-viewpoint renderings of dynamic scenes with human-environment interactions from just a single video.

Neural Radiance Fields (NeRFs) have made substantial progress in novel view synthesis, particularly in reconstructing static 3D scenes from multi-view images. However, NeRFs struggle with fast, complex human-object-scene motions and interactions, limiting their applicability in dynamic scenarios. To overcome these limitations, the researchers developed HOSNeRF, which introduces object bones and state-conditional representations to handle the non-rigid motions and interactions of humans, objects, and the environment more effectively.

For those picturing a skeleton when they hear "object bones," you aren't far off. Object bones are attached to the human skeleton hierarchy and help estimate human-object deformations in a coarse-to-fine manner for the dynamic human-object model. Together with the underlying object linear blend skinning (object LBS), these object bones allow objects' deformations to be estimated accurately through the relative transformations in the kinematic tree of the skeleton hierarchy. Just remember, the hand bones are connected to the... object bones!
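To make the kinematic-tree idea concrete, here is a toy 2D sketch of linear blend skinning with an extra "object bone" hanging off the end of an arm chain. This is not the paper's implementation; the chain, transforms, and skinning weights are all hypothetical, chosen only to show how a bone's world transform accumulates down the tree so that a held object follows the hand.

```python
import numpy as np

def make_transform(angle, offset):
    """2D rigid transform (rotation + translation) as a 3x3 homogeneous matrix."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, offset[0]],
                     [s,  c, offset[1]],
                     [0.0, 0.0, 1.0]])

# Toy kinematic chain: shoulder -> elbow -> hand -> "object bone".
# Each bone's world transform is the product of its ancestors' local
# transforms, so moving the hand automatically carries the object along.
local = [
    make_transform(0.3, (0.0, 0.0)),   # shoulder
    make_transform(0.4, (1.0, 0.0)),   # elbow, 1 unit along the arm
    make_transform(0.2, (1.0, 0.0)),   # hand
    make_transform(0.1, (0.5, 0.0)),   # hypothetical object bone
]
world = []
T = np.eye(3)
for L in local:
    T = T @ L               # accumulate down the kinematic tree
    world.append(T.copy())

def lbs(points, weights, transforms):
    """Linear blend skinning: each point is a weighted blend of bone transforms."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    out = np.zeros_like(homo)
    for w, T in zip(weights.T, transforms):
        out += w[:, None] * (homo @ T.T)
    return out[:, :2]

# Two points on the held object, skinned mostly to the object bone.
pts = np.array([[0.2, 0.0], [0.4, 0.1]])
w = np.array([[0.0, 0.0, 0.3, 0.7],
              [0.0, 0.0, 0.1, 0.9]])  # per-point weights, each row sums to 1
deformed = lbs(pts, w, world)
```

Re-posing any bone higher in the chain (say, the elbow) changes every descendant's world transform, which is exactly the coarse-to-fine behavior the object-bone design exploits.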

The capture portion looks like something out of a Charlie Chaplin movie, but the end results are out of Upgrade.

HOSNeRF tackles (pretty successfully) two challenges: complex object motions in human-object interactions, and the fact that humans interact with different objects at different times, for instance when someone puts a book on a table and then picks it up later. The first is solved by introducing the new object bones into the conventional human skeleton hierarchy, which helps estimate large object deformations. For the latter, the team introduces two new learnable object state embeddings that serve as conditions for learning the human-object representation and the scene representation.
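The state-embedding idea can be sketched in a few lines. This is a minimal stand-in, not HOSNeRF's actual network: the embedding dimension, the single-layer "MLP," and the state names are all assumptions, meant only to show how conditioning the same radiance function on a learnable per-state vector lets one model answer differently depending on the object's state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one learnable embedding per object state
# (e.g. "book on table" vs. "book in hand"), each an 8-dim vector
# that would be optimized jointly with the network during training.
num_states, embed_dim = 2, 8
state_embeddings = rng.normal(size=(num_states, embed_dim))

# Stand-in for a NeRF MLP: a single random linear layer + tanh.
W = rng.normal(size=(4, 3 + embed_dim))

def radiance_mlp(x, cond):
    """Map (3D position, state condition) -> (rgb, density)."""
    h = np.tanh(W @ np.concatenate([x, cond]))
    return h[:3], np.abs(h[3])  # toy rgb and non-negative density

# Query the same 3D point under two different object states: the
# conditioning vector changes the output, so a single model can
# represent a scene whose contents depend on the object's state.
point = np.array([0.1, -0.2, 0.5])
rgb_a, sigma_a = radiance_mlp(point, state_embeddings[0])
rgb_b, sigma_b = radiance_mlp(point, state_embeddings[1])
```

Because the embeddings are learned rather than hand-specified, training can discover whatever state-dependent appearance and geometry changes best explain the video.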

Combining these yields roughly a 40-50% improvement in Learned Perceptual Image Patch Similarity (LPIPS), and it's very noticeable. In other words, HOSNeRF allows pausing a video at any moment and rendering all scene details, including dynamic humans, objects, and backgrounds, from arbitrary viewpoints, all from just a single video.

As the video continued on, I found myself shocked at how consistently better their method reproduced the scene.

The research team has announced that they will release the code, data, and compelling examples of 360° free-viewpoint renderings from single videos on their website, further promoting the advancement and adoption of this groundbreaking technology.

With the development of HOSNeRF, we are one step closer to bridging the gap between static and dynamic scene renderings and unlocking new possibilities in the realm of immersive experiences. As researchers continue to push the boundaries of what is possible with novel view synthesis and 360° free-viewpoint rendering, we can expect even more exciting developments and innovations in the papers to come.
