
Model See Model Do: Speech-Driven Facial Animation with Style Control

Source: arXiv
Abstract

Speech-driven 3D facial animation plays a key role in applications such as virtual avatars, gaming, and digital content creation. While existing methods have made significant progress in achieving accurate lip synchronization and generating basic emotional expressions, they often struggle to capture and effectively transfer nuanced performance styles. We propose a novel example-based generation framework that conditions a latent diffusion model on a reference style clip to produce highly expressive and temporally coherent facial animations. To address the challenge of accurately adhering to the style reference, we introduce a novel conditioning mechanism called style basis, which extracts key poses from the reference and additively guides the diffusion generation process to fit the style without compromising lip synchronization quality. This approach enables the model to capture subtle stylistic cues while ensuring that the generated animations align closely with the input speech. Extensive qualitative, quantitative, and perceptual evaluations demonstrate the effectiveness of our method in faithfully reproducing the desired style while achieving superior lip synchronization across various speech scenarios.
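The abstract only outlines the style-basis mechanism at a high level. Below is a minimal, hypothetical PyTorch sketch of what "extract key poses from the reference and additively guide the diffusion generation process" could look like; the class name, dimensions, linear backbone, and softmax pooling are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class StyleBasisGuidedDenoiser(nn.Module):
    """Hypothetical sketch: key poses pooled from a reference clip
    (a "style basis") additively steer each diffusion denoising step."""

    def __init__(self, latent_dim=128, audio_dim=64, n_basis=8):
        super().__init__()
        # Backbone predicting the clean latent from (noisy latent, speech, t).
        self.denoiser = nn.Sequential(
            nn.Linear(latent_dim + audio_dim + 1, 256),
            nn.SiLU(),
            nn.Linear(256, latent_dim),
        )
        # Soft selection of n_basis key poses from the reference frames.
        self.pose_scorer = nn.Linear(latent_dim, n_basis)
        # Per-frame weights that mix the style basis into the prediction.
        self.mix = nn.Linear(latent_dim, n_basis)

    def extract_style_basis(self, ref_frames):
        # ref_frames: (T_ref, latent_dim) latents of the reference style clip.
        scores = self.pose_scorer(ref_frames).softmax(dim=0)   # (T_ref, n_basis)
        return scores.t() @ ref_frames                         # (n_basis, latent_dim)

    def forward(self, x_t, audio_feat, t, style_basis):
        # x_t: (T, latent_dim) noisy animation latents; audio_feat: (T, audio_dim).
        t_emb = torch.full((x_t.shape[0], 1), float(t))
        x0_pred = self.denoiser(torch.cat([x_t, audio_feat, t_emb], dim=-1))
        # Additive guidance: nudge the prediction toward the style key poses
        # without overwriting the speech-conditioned motion carrying lip sync.
        weights = self.mix(x0_pred).softmax(dim=-1)            # (T, n_basis)
        return x0_pred + weights @ style_basis


# Usage sketch: pool a style basis from a reference clip, then run one step.
model = StyleBasisGuidedDenoiser()
basis = model.extract_style_basis(torch.randn(40, 128))
x0 = model(torch.randn(25, 128), torch.randn(25, 64), t=0.3, style_basis=basis)
print(x0.shape)  # torch.Size([25, 128])
```

The additive residual is the key design point claimed in the abstract: style enters as a correction on top of the speech-conditioned prediction rather than replacing it, which is how the method can fit the reference style without degrading lip synchronization.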

Authors: Yifang Pan, Karan Singh, Luiz Gustavo Hafemann

DOI: 10.1145/3721238.3730672

Subjects: Computing Technology; Computer Technology

Citation: Yifang Pan, Karan Singh, Luiz Gustavo Hafemann. Model See Model Do: Speech-Driven Facial Animation with Style Control [EB/OL]. (2025-05-02) [2025-06-06]. https://arxiv.org/abs/2505.01319.
