EXPOTION: Facial Expression and Motion Control for Multimodal Music Generation
We propose Expotion (Facial Expression and Motion Control for Multimodal Music Generation), a generative model that leverages multimodal visual controls - specifically, human facial expressions and upper-body motion - together with text prompts to produce expressive and temporally accurate music. We adopt parameter-efficient fine-tuning (PEFT) of a pretrained text-to-music generation model, enabling fine-grained adaptation to the multimodal controls using a small dataset. To ensure precise synchronization between video and music, we introduce a temporal smoothing strategy that aligns the modalities in time. Experiments demonstrate that integrating visual features alongside textual descriptions improves the generated music in terms of musicality, creativity, beat-tempo consistency, temporal alignment with the video, and text adherence, surpassing both the proposed baselines and existing state-of-the-art video-to-music generation models. Additionally, we introduce a novel dataset consisting of 7 hours of synchronized video recordings capturing expressive facial and upper-body gestures aligned with corresponding music, offering significant potential for future research in multimodal and interactive music generation.
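The abstract does not detail the temporal smoothing strategy, but a plausible reading is that frame-level visual features are smoothed over time and resampled to the temporal resolution of the music model before conditioning. The sketch below is a hypothetical illustration of that idea; the function name, window size, and resampling method are assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def smooth_and_align(visual_feats: torch.Tensor,
                     target_len: int,
                     window: int = 5) -> torch.Tensor:
    """Hypothetical sketch: align frame-level visual features to a
    music model's token rate via moving-average smoothing + resampling.

    visual_feats: (T_video, D) features from a face/motion encoder.
    target_len:   number of time steps expected by the music model.
    window:       moving-average window used for temporal smoothing.
    """
    x = visual_feats.t().unsqueeze(0)  # (1, D, T_video)

    # Moving-average smoothing over time to suppress frame-level jitter.
    kernel = torch.ones(x.size(1), 1, window) / window
    x = F.pad(x, (window // 2, window - 1 - window // 2), mode="replicate")
    x = F.conv1d(x, kernel, groups=x.size(1))

    # Resample to the music model's temporal resolution.
    x = F.interpolate(x, size=target_len, mode="linear", align_corners=False)
    return x.squeeze(0).t()  # (target_len, D)

# Example: 5 s of 30 fps video (150 frames) conditioned on a model
# operating at 50 steps per second (250 time steps).
feats = torch.randn(150, 512)
aligned = smooth_and_align(feats, target_len=250)
```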
Fathinah Izzati, Xinyue Li, Gus Xia
Subject: Computing and Computer Technology
Fathinah Izzati, Xinyue Li, Gus Xia. EXPOTION: Facial Expression and Motion Control for Multimodal Music Generation [EB/OL]. (2025-07-07) [2025-07-21]. https://arxiv.org/abs/2507.04955.