# PBDyG: Position Based Dynamic Gaussians for Motion-Aware Clothed Human Avatars

This paper introduces a novel clothed human model that can be learned from multiview RGB videos, with a particular emphasis on recovering physically accurate body and cloth movements. Our method, Position Based Dynamic Gaussians (PBDyG), realizes "movement-dependent" cloth deformation via physical simulation rather than relying merely on "pose-dependent" rigid transformations. We model the clothed human holistically, but as two distinct physical entities in contact: clothing, modeled as 3D Gaussians, attached to a skinned SMPL body that follows the movement of the person in the input videos. The articulation of the SMPL body also drives a physically based simulation of the clothing Gaussians, transforming the avatar into novel poses. To run the position-based dynamics simulation, physical properties, including mass and material stiffness, are estimated from the RGB videos through Dynamic 3D Gaussian Splatting. Experiments demonstrate that our method not only reproduces appearance accurately but also enables the reconstruction of avatars wearing highly deformable garments, such as skirts or coats, which have been challenging to reconstruct with existing methods.
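To make the simulation component concrete, below is a minimal sketch of a generic position-based dynamics (PBD) step over a particle system with distance constraints, in the spirit of what the paper describes. This is not the paper's implementation: the function name, parameters, and the simple Gauss–Seidel distance-constraint projection are illustrative assumptions; the actual method couples such a solver to SMPL-driven boundary conditions and per-Gaussian mass/stiffness estimated from video.

```python
import numpy as np

def pbd_step(x, v, edges, rest_len, inv_mass, stiffness,
             dt=1.0 / 60.0, iters=10, gravity=(0.0, -9.81, 0.0)):
    """One generic position-based dynamics step (illustrative sketch).

    x         : (N, 3) current particle positions
    v         : (N, 3) particle velocities
    edges     : (E, 2) index pairs forming distance constraints
    rest_len  : (E,)   rest lengths of those constraints
    inv_mass  : (N,)   inverse masses (0 pins a particle, e.g. to the body)
    stiffness : (E,)   per-constraint stiffness in [0, 1]
    """
    g = np.asarray(gravity)

    # 1. Predict positions: integrate external forces explicitly,
    #    skipping pinned particles (inverse mass 0).
    v = v + dt * g * (inv_mass[:, None] > 0)
    p = x + dt * v

    # 2. Project distance constraints, Gauss-Seidel style.
    for _ in range(iters):
        for (i, j), l0, k in zip(edges, rest_len, stiffness):
            d = p[j] - p[i]
            dist = np.linalg.norm(d)
            w_sum = inv_mass[i] + inv_mass[j]
            if dist < 1e-9 or w_sum == 0.0:
                continue
            # Move both endpoints toward the rest length, weighted
            # by inverse mass and scaled by the constraint stiffness.
            corr = k * (dist - l0) / (dist * w_sum) * d
            p[i] += inv_mass[i] * corr
            p[j] -= inv_mass[j] * corr

    # 3. Derive velocities from the projected positions.
    v = (p - x) / dt
    return p, v
```

In a PBDyG-style pipeline, the pinned particles (inverse mass 0) would track the skinned SMPL surface each frame, so body articulation drives the free clothing particles through the constraint projection.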
