National Preprint Platform

ST-GDance: Long-Term and Collision-Free Group Choreography from Music


Source: arXiv
English Abstract

Group dance generation from music has broad applications in film, gaming, and animation production. However, it requires synchronizing multiple dancers while maintaining spatial coordination. As the number of dancers and sequence length increase, this task faces higher computational complexity and a greater risk of motion collisions. Existing methods often struggle to model dense spatial-temporal interactions, leading to scalability issues and multi-dancer collisions. To address these challenges, we propose ST-GDance, a novel framework that decouples spatial and temporal dependencies to optimize long-term and collision-free group choreography. We employ lightweight graph convolutions for distance-aware spatial modeling and accelerated sparse attention for efficient temporal modeling. This design significantly reduces computational costs while ensuring smooth and collision-free interactions. Experiments on the AIOZ-GDance dataset demonstrate that ST-GDance outperforms state-of-the-art baselines, particularly in generating long and coherent group dance sequences. Project page: https://yilliajing.github.io/ST-GDance-Website/.
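As a rough illustration of the "distance-aware spatial modeling" mentioned in the abstract, the sketch below implements a simple distance-weighted graph convolution over dancer root positions, so that nearby dancers influence each other's features more strongly. The function name, the Gaussian edge-weight kernel, and the row normalization are illustrative assumptions for this sketch; they are not claimed to match ST-GDance's actual layer.

```python
import numpy as np

def distance_aware_gcn_layer(positions, features, W, sigma=1.0):
    """One distance-aware graph convolution step over N dancers.

    positions: (N, 3) dancer root positions
    features:  (N, D) per-dancer motion features
    W:         (D, D_out) learnable weight matrix

    Edge weights decay with pairwise distance (Gaussian kernel),
    so spatially close dancers exchange more information.
    """
    diff = positions[:, None, :] - positions[None, :, :]  # (N, N, 3) offsets
    dist = np.linalg.norm(diff, axis=-1)                  # (N, N) distances
    A = np.exp(-dist**2 / (2.0 * sigma**2))               # soft adjacency
    A = A / A.sum(axis=1, keepdims=True)                  # row-normalize
    return np.tanh(A @ features @ W)                      # aggregate + project

rng = np.random.default_rng(0)
pos = rng.normal(size=(4, 3))        # 4 dancers in 3D space
feat = rng.normal(size=(4, 8))       # 8-dim motion features each
W = rng.normal(size=(8, 8)) * 0.1
out = distance_aware_gcn_layer(pos, feat, W)
print(out.shape)  # (4, 8)
```

Because the adjacency is dense over dancers but the dancer count N is small, this spatial step stays cheap; the costly axis in group choreography is the temporal one, which the abstract says ST-GDance handles with accelerated sparse attention.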

Jing Xu, Weiqiang Wang, Cunjian Chen, Jun Liu, Qiuhong Ke

Subject: Computing Technology, Computer Technology

Jing Xu, Weiqiang Wang, Cunjian Chen, Jun Liu, Qiuhong Ke. ST-GDance: Long-Term and Collision-Free Group Choreography from Music [EB/OL]. (2025-07-30) [2025-08-11]. https://arxiv.org/abs/2507.21518.
