PhysAvatar's Dynamic Dances

Michael Rubloff

Apr 9, 2024

Playing as yourself in a video game has always seemed like a fun idea, though perhaps not in GTA. Now, we're one step closer to making that a reality with PhysAvatar.

Throughout the paper, the authors reference inverse rendering, a family of techniques that includes NeRFs and Gaussian Splatting. Inverse rendering is essentially about recovering 3D representations from 2D images, a concept that might sound familiar.

PhysAvatar uses dynamic Gaussian Splatting and "inverse physics" to estimate and model the physical parameters of clothing. This offers a comprehensive solution to the long-standing challenge of reconstructing dynamic 3D avatars.

The pipeline can be broken down into three main parts: mesh tracking, physics-based dynamic modeling, and physics-based appearance refinement.

Using multi-view video and a garment mesh as input, they attach 3D Gaussians (building on Dynamic 3D Gaussians) to the mesh surface at the outset. This binding involves positioning, orienting, and scaling the Gaussians to match the initial configuration of the garment's mesh, ensuring that they accurately represent the garment's state at the start of the video sequence.
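
To make that binding concrete, here is a minimal NumPy sketch of attaching one Gaussian per mesh face. The barycenter centering, face-aligned frame, and thin-along-the-normal scale are illustrative assumptions, not necessarily the paper's exact scheme:

```python
import numpy as np

def bind_gaussians_to_mesh(vertices, faces):
    """One Gaussian per face: centered on the barycenter, oriented with
    the face's local frame, scaled to the face's extent.
    Illustrative sketch; PhysAvatar's exact binding may differ."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))

    # Centers: face barycenters
    centers = (v0 + v1 + v2) / 3.0

    # Orientation: local frame built from one edge and the face normal
    e1 = v1 - v0
    normals = np.cross(e1, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    t1 = e1 / np.linalg.norm(e1, axis=1, keepdims=True)
    t2 = np.cross(normals, t1)
    rotations = np.stack([t1, t2, normals], axis=-1)  # (F, 3, 3)

    # Scales: roughly the in-plane edge lengths, thin along the normal
    s1 = np.linalg.norm(e1, axis=1)
    s2 = np.linalg.norm(v2 - v0, axis=1)
    scales = np.stack([s1, s2, 0.05 * (s1 + s2) / 2.0], axis=1)

    opacities = np.ones(len(faces))  # start fully opaque
    return centers, rotations, scales, opacities
```

Because each Gaussian inherits its pose from a face, moving the mesh vertices moves the Gaussians for free, which is exactly what the tracking stage exploits.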

As the avatar moves, the positions and orientations of the 3D Gaussians are dynamically adjusted to follow the deformations of the garment's mesh. This process is facilitated by optimizing the parameters of the Gaussians—position, rotation, scale, and opacity—to minimize the difference between the rendered images and the reference images from the video data.
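
Sketched below is what that photometric optimization might look like in PyTorch. The `gaussians` container, the `render` function, and the hyperparameters are all stand-ins for illustration; the point is simply that a differentiable rasterizer lets image-space error drive the Gaussian parameters:

```python
import torch

def track_frame(gaussians, reference_images, cameras, render, steps=200):
    """Fit Gaussian parameters to one frame by minimizing photometric
    error. `gaussians` (with .positions/.rotations/.scales/.opacities
    tensors) and `render` (a differentiable rasterizer) are hypothetical."""
    params = [gaussians.positions, gaussians.rotations,
              gaussians.scales, gaussians.opacities]
    for p in params:
        p.requires_grad_(True)
    opt = torch.optim.Adam(params, lr=1e-3)

    for _ in range(steps):
        loss = 0.0
        for cam, ref in zip(cameras, reference_images):
            rendered = render(gaussians, cam)
            loss = loss + torch.nn.functional.l1_loss(rendered, ref)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gaussians
```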

The optimized parameters of the 3D Gaussians inform the reconstruction of the garment's mesh across frames. This reconstruction is not static but evolves with the avatar's movements, capturing the nuanced dynamics of clothing, such as wrinkles and draping effects, that are critical for realism.
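
If each Gaussian sits at a face barycenter, as in the binding sketch above, recovering vertex positions from tracked Gaussian centers becomes a sparse least-squares problem: every face contributes one equation, (v0 + v1 + v2) / 3 = center. A minimal SciPy version, leaving out the smoothness and regularization terms a real tracker would add:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def vertices_from_centers(centers, faces, n_vertices):
    """Least-squares recovery of mesh vertices from per-face Gaussian
    centers (illustrative; real mesh tracking adds regularization)."""
    A = lil_matrix((len(faces), n_vertices))
    for row, face in enumerate(faces):
        for v in face:
            A[row, v] = 1.0 / 3.0  # barycenter constraint for this face
    A = A.tocsr()
    # Solve each coordinate (x, y, z) independently
    return np.stack([lsqr(A, centers[:, d])[0] for d in range(3)], axis=1)
```

A typical triangle mesh has roughly twice as many faces as vertices, so the system is comfortably overdetermined.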

After tracking the meshes, the physical properties of the garments are estimated so that the simulated dynamics match the observed movements. Once estimated, these parameters allow the garment to be re-simulated and rendered under novel motions and viewpoints.
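
In code, that estimation stage has the shape of a fitting loop wrapped around a cloth simulator. The sketch below assumes, for brevity, a differentiable `simulate` function; the parameter names and optimizer choices are illustrative, not the paper's:

```python
import torch

def estimate_material(tracked_meshes, body_motion, simulate, iters=50):
    """Fit cloth material parameters so simulated garment motion matches
    the tracked meshes. `simulate` is a hypothetical differentiable
    cloth simulator returning per-frame vertex tensors."""
    # Log-parameterization keeps density/stiffness strictly positive
    log_params = torch.zeros(3, requires_grad=True)
    opt = torch.optim.Adam([log_params], lr=1e-2)

    for _ in range(iters):
        density, stretch, bending = torch.exp(log_params)
        sim_frames = simulate(tracked_meshes[0], body_motion,
                              density=density, stretch=stretch,
                              bending=bending)
        # Match simulated vertices to the tracked ones, frame by frame
        loss = sum(((s - t) ** 2).mean()
                   for s, t in zip(sim_frames, tracked_meshes[1:]))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.exp(log_params).detach()
```

Once fitted, those few scalars are what generalize: the same simulator can then drive the garment through motions that were never captured.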

The full pipeline operates by first establishing dense correspondences across video frames, then optimizing the fabric's physical parameters through an inverse physics approach, and finally refining appearance with a physically based inverse renderer. This ensures avatars can be rendered with high fidelity in any lighting environment, addressing the challenge of baked-in shadows and other artifacts that can detract from realism.
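
The appearance refinement step can be pictured the same way: with geometry and motion frozen, optimize the material maps under an explicit lighting model, so shading lands in the lighting term instead of being baked into the texture. A minimal sketch, where `pbr_render` stands in for a differentiable physically based renderer:

```python
import torch

def refine_appearance(albedo_init, meshes, cameras, frames, pbr_render,
                      steps=300):
    """Solve for albedo given fixed geometry. `pbr_render` is a
    hypothetical differentiable PBR renderer that models lighting
    explicitly, so shadows are not baked into the recovered albedo."""
    albedo = albedo_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([albedo], lr=5e-3)

    for _ in range(steps):
        loss = 0.0
        for mesh, cam, ref in zip(meshes, cameras, frames):
            img = pbr_render(mesh, albedo, cam)
            loss = loss + (img - ref).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return albedo.detach()
```

Because lighting is separated out during optimization, the recovered avatar can be dropped into a new environment and relit plausibly.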

By embedding physics-based simulation into the avatar reconstruction process, PhysAvatar can model the dynamics of clothing with remarkable accuracy, capturing complex behaviors like friction and collision.

Interestingly, their outputs can be exported directly into standard 3D software such as Blender. There is a placeholder for a code release, but it is unclear both when that will happen and how the code will be licensed.

When HumanRF and the ActorsHQ dataset (used in PhysAvatar) came out, I spoke about the potential impact on the fashion industry, and here's a closer glimpse of that reality. Imagine fashion shows that take place asynchronously, watched from your own living room. There's still quite a bit of work ahead of us to get there, but it is achievable and, I believe, inevitable.

While PhysAvatar represents a significant leap forward, it also highlights areas for further research and development. The current need for manual garment segmentation and mesh unwrapping calls for automation to streamline the process. Additionally, improving the accuracy of the underlying human body model used for garment simulation will enhance collision detection and overall realism. Adapting the framework to work with sparse view captures will also broaden its applicability, making high-fidelity avatar reconstruction accessible in less controlled environments.
