Contextual Gesture: Co-Speech Gesture Video Generation through Context-aware Gesture Representation

Source: arXiv
English Abstract

Co-speech gesture generation is crucial for creating lifelike avatars and enhancing human-computer interaction by synchronizing gestures with speech. Despite recent advances, existing methods struggle to accurately identify the rhythmic or semantic triggers in audio needed to generate contextualized gesture patterns, and to achieve pixel-level realism. To address these challenges, we introduce Contextual Gesture, a framework that improves co-speech gesture video generation through three innovative components: (1) a chronological speech-gesture alignment that temporally connects the two modalities, (2) a contextualized gesture tokenization that incorporates speech context into the motion pattern representation through distillation, and (3) a structure-aware refinement module that employs edge connections linking gesture keypoints to improve video generation. Our extensive experiments demonstrate that Contextual Gesture not only produces realistic, speech-aligned gesture videos but also supports long-sequence generation and video gesture editing applications, as shown in Fig. 1.
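Read as a high-level pipeline, the three components could be sketched as below. This is a minimal illustrative sketch under assumed design choices (cross-attention for the alignment, a VQ-style codebook for the tokenization, and message passing along skeleton edges for the refinement); all module names, dimensions, and wiring are assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of the three-stage pipeline described in the abstract.
# Module names, dimensions, and wiring are illustrative assumptions, not the
# paper's actual code.
import torch
import torch.nn as nn

class SpeechGestureAligner(nn.Module):
    """Chronological alignment: motion features attend to audio features."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio_feats, motion_feats):
        # Queries come from motion, keys/values from audio, so each motion
        # frame is tied to the speech segments that plausibly triggered it.
        aligned, _ = self.attn(motion_feats, audio_feats, audio_feats)
        return aligned

class ContextualGestureTokenizer(nn.Module):
    """VQ-style tokenizer; speech context would be distilled into the codes."""
    def __init__(self, dim: int = 256, codebook_size: int = 512):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, feats):
        # Nearest-codeword quantization (straight-through gradient omitted).
        flat = feats.reshape(-1, feats.size(-1))                 # (B*T, D)
        dists = torch.cdist(flat, self.codebook.weight)          # (B*T, K)
        ids = dists.argmin(dim=-1).reshape(feats.shape[:-1])     # (B, T)
        return self.codebook(ids), ids

class StructureAwareRefiner(nn.Module):
    """Refinement via edge connections linking gesture keypoints."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.edge_proj = nn.Linear(2 * dim, dim)

    def forward(self, joint_feats, edges):
        # edges: (parent, child) keypoint index pairs, e.g. skeleton bones.
        refined = joint_feats.clone()
        for a, b in edges:
            pair = torch.cat([joint_feats[:, a], joint_feats[:, b]], dim=-1)
            refined[:, b] = refined[:, b] + self.edge_proj(pair)
        return refined

# Toy end-to-end pass with random tensors.
B, T, J, D = 2, 16, 17, 256
audio, motion = torch.randn(B, T, D), torch.randn(B, T, D)
aligned = SpeechGestureAligner(D)(audio, motion)            # (B, T, D)
quantized, ids = ContextualGestureTokenizer(D)(aligned)     # (B, T, D), (B, T)
refined = StructureAwareRefiner(D)(torch.randn(B, J, D), [(0, 1), (1, 2)])
print(ids.shape, quantized.shape, refined.shape)
```

The sketch only fixes the data flow the abstract implies: alignment produces speech-conditioned motion features, tokenization discretizes them into a gesture vocabulary, and the refiner exploits skeletal connectivity before pixel-level rendering; the distillation loss and the video generator themselves are beyond this outline.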

Ari Shapiro, Pengfei Zhang, Pablo Garrido, Kyle Olszewski, Hyeongwoo Kim, Pinxin Liu

Computing Technology; Computer Technology

Ari Shapiro, Pengfei Zhang, Pablo Garrido, Kyle Olszewski, Hyeongwoo Kim, Pinxin Liu. Contextual Gesture: Co-Speech Gesture Video Generation through Context-aware Gesture Representation [EB/OL]. (2025-08-04) [2025-08-19]. https://arxiv.org/abs/2502.07239.