We propose VASA-3D, an audio-driven, single-shot 3D head avatar generator. This work tackles two major challenges: capturing the subtle expression details present in real human faces, and reconstructing an intricate 3D head avatar from a single portrait image. To model expression details accurately, VASA-3D leverages the motion latent of VASA-1, a method that achieves exceptional realism and vividness in 2D talking-head generation. A critical element of our work is translating this motion latent to 3D, which we accomplish by devising a 3D head model conditioned on the motion latent. This model is customized to a single image through an optimization framework that employs numerous video frames of the reference head synthesized from the input image. The optimization uses training losses that are robust to artifacts and limited pose coverage in the generated training data. Our experiments show that VASA-3D produces realistic 3D talking heads unattainable by prior art, and it supports online generation of 512×512 free-viewpoint videos at up to 75 FPS, enabling more immersive engagement with lifelike 3D avatars.
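To make the described pipeline concrete, here is a minimal sketch of the per-identity optimization stage the abstract outlines: a 3D head model conditioned on a motion latent is fit to video frames synthesized from the single input portrait. Everything here is an illustrative assumption — the module `MotionConditionedHead3D`, the tensor shapes, the camera encoding, and the Charbonnier loss (used as a stand-in for the paper's unspecified robust losses) are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of VASA-3D's per-identity optimization, assuming a
# generic learnable 3D head model conditioned on a motion latent. All names,
# shapes, and the robust-loss choice are illustrative assumptions.
import torch
import torch.nn as nn

class MotionConditionedHead3D(nn.Module):
    """Toy stand-in for a 3D head model conditioned on a motion latent."""
    def __init__(self, latent_dim=256, image_size=64):
        super().__init__()
        self.image_size = image_size
        # Learnable per-identity parameters (stand-in for a 3D head representation).
        self.identity = nn.Parameter(torch.zeros(1, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(2 * latent_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 3 * image_size * image_size),
        )

    def forward(self, motion_latent, camera):
        # motion_latent: (B, latent_dim); camera: (B, 3) viewpoint stand-in,
        # enabling free-viewpoint rendering at inference time.
        x = torch.cat([self.identity.expand(motion_latent.shape[0], -1),
                       motion_latent, camera], dim=-1)
        img = self.decoder(x).view(-1, 3, self.image_size, self.image_size)
        return torch.sigmoid(img)

def charbonnier(pred, target, eps=1e-3):
    # Robust L1 variant: down-weights outlier pixels, a simple proxy for
    # losses robust to artifacts in the synthesized training frames.
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

# Assumed inputs: motion latents and frames produced by a VASA-1-style 2D
# generator from the single reference portrait (random tensors as stand-ins).
num_frames, latent_dim = 16, 256
motion_latents = torch.randn(num_frames, latent_dim)
cameras = torch.randn(num_frames, 3)
target_frames = torch.rand(num_frames, 3, 64, 64)

model = MotionConditionedHead3D(latent_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    opt.zero_grad()
    pred = model(motion_latents, cameras)
    loss = charbonnier(pred, target_frames)
    loss.backward()
    opt.step()
```

At inference, one would drive the fitted model with audio-derived motion latents and a freely chosen camera, which is what allows the free-viewpoint rendering the abstract claims; the real system's renderer and losses would differ substantially from this toy decoder.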