
MoDA: Multi-modal Diffusion Architecture for Talking Head Generation


Source: arXiv
English Abstract

Talking head generation with arbitrary identities and speech audio remains a crucial problem in the realm of digital humans and the virtual metaverse. Recently, diffusion models have become a popular generative technique in this field owing to their strong generation and generalization capabilities. However, several challenges remain for diffusion-based methods: 1) inefficient inference and visual artifacts, which arise from the implicit latent space of Variational Auto-Encoders (VAE), complicating the diffusion process; 2) a lack of authentic facial expressions and head movements, resulting from insufficient multi-modal information interaction. In this paper, MoDA handles these challenges by 1) defining a joint parameter space that bridges motion generation and neural rendering, and leveraging flow matching to simplify the diffusion learning process; 2) introducing a multi-modal diffusion architecture to model the interaction among noisy motion, audio, and auxiliary conditions, ultimately enhancing overall facial expressiveness. Subsequently, a coarse-to-fine fusion strategy is adopted to progressively integrate different modalities, ensuring effective integration across feature spaces. Experimental results demonstrate that MoDA significantly improves video diversity, realism, and efficiency, making it suitable for real-world applications.
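
The abstract describes training motion generation with flow matching, conditioned on audio over a joint motion parameter space. Below is a minimal illustrative sketch of such a conditional flow-matching training step in PyTorch; the module names, dimensions, and the simple concatenation-based conditioning are assumptions for illustration only, not the authors' architecture or fusion strategy.

```python
# Illustrative sketch (assumptions, not the paper's implementation): a rectified-flow
# style flow-matching objective for audio-conditioned motion parameter generation.
import torch
import torch.nn as nn

class MotionFlowNet(nn.Module):
    """Predicts a velocity field over motion parameters, conditioned on audio features."""
    def __init__(self, motion_dim=64, audio_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(motion_dim + audio_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, motion_dim),
        )

    def forward(self, x_t, t, audio):
        # Concatenate noisy motion, timestep, and audio condition (simple fusion assumption).
        return self.net(torch.cat([x_t, t, audio], dim=-1))

def flow_matching_loss(model, x1, audio):
    """Regress the straight-line velocity from noise x0 to data x1 at a random time t."""
    x0 = torch.randn_like(x1)              # noise sample
    t = torch.rand(x1.size(0), 1)          # uniform time in [0, 1]
    x_t = (1 - t) * x0 + t * x1            # linear interpolation path
    v_target = x1 - x0                     # constant target velocity along the path
    v_pred = model(x_t, t, audio)
    return ((v_pred - v_target) ** 2).mean()

# Usage: one optimization step on a batch of motion parameters and audio features.
model = MotionFlowNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
motion = torch.randn(8, 64)    # placeholder ground-truth motion parameters
audio = torch.randn(8, 128)    # placeholder audio features
loss = flow_matching_loss(model, motion, audio)
loss.backward()
opt.step()
```

Compared with denoising diffusion in a VAE latent space, this linear-path objective gives a simpler regression target, which is the motivation the abstract cites for adopting flow matching.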

Xinyang Li, Gen Li, Zhihui Lin, Yichen Qian, GongXin Yao, Weinan Jia, Weihua Chen, Fan Wang

Computing Technology, Computer Technology

Xinyang Li, Gen Li, Zhihui Lin, Yichen Qian, GongXin Yao, Weinan Jia, Weihua Chen, Fan Wang. MoDA: Multi-modal Diffusion Architecture for Talking Head Generation [EB/OL]. (2025-07-04) [2025-07-25]. https://arxiv.org/abs/2507.03256.
