Spatial-Temporal Graph Mamba for Music-Guided Dance Video Synthesis

Source: arXiv
Abstract

We propose a novel spatial-temporal graph Mamba (STG-Mamba) for the music-guided dance video synthesis task, i.e., to translate the input music to a dance video. STG-Mamba consists of two translation mappings: music-to-skeleton translation and skeleton-to-video translation. In the music-to-skeleton translation, we introduce a novel spatial-temporal graph Mamba (STGM) block to effectively construct skeleton sequences from the input music, capturing dependencies between joints in both the spatial and temporal dimensions. For the skeleton-to-video translation, we propose a novel self-supervised regularization network to translate the generated skeletons, along with a conditional image, into a dance video. Lastly, we collect a new skeleton-to-video translation dataset from the Internet, containing 54,944 video clips. Extensive experiments demonstrate that STG-Mamba achieves significantly better results than existing methods.
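The abstract only names the components, so the following is a minimal, hypothetical sketch of the first stage (music-to-skeleton translation) to make the idea concrete. It is not the authors' implementation: a one-hop graph convolution over a learnable joint adjacency stands in for the spatial modelling, a plain GRU stands in for the Mamba state-space layers, and all module names, shapes, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of music-to-skeleton translation with interleaved
# spatial (joint-graph) and temporal mixing. Stand-ins, not the paper's STGM.
import torch
import torch.nn as nn


class SpatialTemporalBlock(nn.Module):
    """Spatial mixing over the joint graph + temporal mixing over frames."""

    def __init__(self, num_joints: int, dim: int):
        super().__init__()
        # Learnable joint adjacency (assumption: the real model may use a
        # fixed skeleton graph or a data-dependent one).
        self.adj = nn.Parameter(torch.eye(num_joints))
        self.spatial_proj = nn.Linear(dim, dim)
        # GRU stands in for the Mamba state-space layer used in the paper.
        self.temporal = nn.GRU(num_joints * dim, num_joints * dim, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, joints, dim)
        b, t, j, d = x.shape
        # Spatial step: propagate features along the joint graph (residual).
        mixed = torch.einsum("ij,btjd->btid", self.adj.softmax(-1), self.spatial_proj(x))
        x = self.norm(mixed + x)
        # Temporal step: model dependencies across frames (residual).
        y, _ = self.temporal(x.reshape(b, t, j * d))
        return y.reshape(b, t, j, d) + x


class MusicToSkeleton(nn.Module):
    """Maps per-frame music features to 2D joint coordinates."""

    def __init__(self, music_dim: int = 128, num_joints: int = 18, dim: int = 64, depth: int = 4):
        super().__init__()
        self.embed = nn.Linear(music_dim, num_joints * dim)
        self.blocks = nn.ModuleList(SpatialTemporalBlock(num_joints, dim) for _ in range(depth))
        self.head = nn.Linear(dim, 2)  # (x, y) per joint
        self.num_joints, self.dim = num_joints, dim

    def forward(self, music: torch.Tensor) -> torch.Tensor:
        # music: (batch, time, music_dim) per-frame audio features
        b, t, _ = music.shape
        h = self.embed(music).reshape(b, t, self.num_joints, self.dim)
        for blk in self.blocks:
            h = blk(h)
        return self.head(h)  # (batch, time, joints, 2)


if __name__ == "__main__":
    model = MusicToSkeleton()
    music = torch.randn(2, 32, 128)  # 2 clips, 32 frames of music features
    skeletons = model(music)
    print(skeletons.shape)           # torch.Size([2, 32, 18, 2])
```

The second stage (skeleton-to-video translation with a conditional image and self-supervised regularization) is not sketched here, since the abstract gives no detail to ground it.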

Hao Tang, Ling Shao, Zhenyu Zhang, Luc Van Gool, Nicu Sebe

Subject: Computing Technology; Computer Technology

Hao Tang, Ling Shao, Zhenyu Zhang, Luc Van Gool, Nicu Sebe. Spatial-Temporal Graph Mamba for Music-Guided Dance Video Synthesis [EB/OL]. (2025-07-09) [2025-07-16]. https://arxiv.org/abs/2507.06689.
