Toward Rich Video Human-Motion2D Generation
Generating realistic and controllable human motions, particularly those involving rich multi-character interactions, remains a significant challenge due to data scarcity and the complexity of modeling inter-personal dynamics. To address these limitations, we first introduce Motion2D-Video-150K, a new large-scale rich video human motion 2D dataset comprising 150,000 video sequences. Motion2D-Video-150K features a balanced distribution of diverse single-character and, crucially, double-character interactive actions, each paired with a detailed textual description. Building upon this dataset, we propose a novel diffusion-based rich video human motion2D generation (RVHM2D) model. RVHM2D incorporates an enhanced textual conditioning mechanism utilizing either dual text encoders (CLIP-L/B) or T5-XXL with both global and local features. We devise a two-stage training strategy: the model is first trained with a standard diffusion objective and then fine-tuned using reinforcement learning with an FID-based reward to further enhance motion realism and text alignment. Extensive experiments demonstrate that RVHM2D achieves leading performance on the Motion2D-Video-150K benchmark for generating both single-character and interactive double-character scenarios.
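To make the described pipeline concrete, the sketch below illustrates the general shape of text-conditioned motion diffusion with global and local text features and a two-stage objective (standard noise prediction, then an FID-style reward for fine-tuning). It is a minimal, hypothetical illustration: the encoder, denoiser, feature dimensions, and the FID proxy are all placeholder assumptions and are not the authors' actual RVHM2D implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a text encoder (the paper uses CLIP-L/B or T5-XXL);
# it returns per-token ("local") features and a pooled ("global") feature.
class ToyTextEncoder(nn.Module):
    def __init__(self, vocab=1000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens):                      # tokens: (B, T)
        local = self.proj(self.embed(tokens))       # (B, T, D) token-level features
        global_feat = local.mean(dim=1)              # (B, D) pooled feature
        return local, global_feat

# Hypothetical denoiser: predicts noise for a 2D motion sequence, conditioned
# on fused global features and cross-attended local features from two encoders.
class MotionDenoiser(nn.Module):
    def __init__(self, motion_dim=34, dim=256):
        super().__init__()
        self.in_proj = nn.Linear(motion_dim, dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.cond_proj = nn.Linear(2 * dim, dim)     # fuse the two global features
        self.out_proj = nn.Linear(dim, motion_dim)

    def forward(self, noisy_motion, t, local_a, local_b, glob_a, glob_b):
        h = self.in_proj(noisy_motion)                           # (B, F, D)
        cond = self.cond_proj(torch.cat([glob_a, glob_b], -1))   # (B, D)
        h = h + cond.unsqueeze(1) + t.view(-1, 1, 1)             # add condition + timestep
        ctx = torch.cat([local_a, local_b], dim=1)               # (B, Ta+Tb, D)
        h, _ = self.cross_attn(h, ctx, ctx)                      # attend to local text features
        return self.out_proj(h)                                  # predicted noise

# Stage 1: standard diffusion (noise-prediction) objective.
def diffusion_loss(denoiser, enc_a, enc_b, motion, tokens):
    b = motion.size(0)
    t = torch.rand(b, device=motion.device)          # continuous timestep in [0, 1]
    noise = torch.randn_like(motion)
    alpha = (1.0 - t).view(-1, 1, 1)
    noisy = alpha.sqrt() * motion + (1 - alpha).sqrt() * noise
    la, ga = enc_a(tokens)
    lb, gb = enc_b(tokens)
    pred = denoiser(noisy, t, la, lb, ga, gb)
    return F.mse_loss(pred, noise)

# Stage 2 (rough placeholder): reward for RL fine-tuning derived from an
# FID-like distance between generated and reference motion feature statistics;
# a lower distance yields a higher reward.
def fid_reward(gen_feats, ref_feats):
    mu_g, mu_r = gen_feats.mean(0), ref_feats.mean(0)
    var_g, var_r = gen_feats.var(0), ref_feats.var(0)
    fid_proxy = (mu_g - mu_r).pow(2).sum() + (var_g + var_r - 2 * (var_g * var_r).sqrt()).sum()
    return -fid_proxy
```

The split mirrors the two-stage recipe described in the abstract: the denoiser is first optimized with the reconstruction-style diffusion loss, and the reward term is only introduced in a subsequent fine-tuning phase.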
Ruihao Xi, Xuekuan Wang, Yongcheng Li, Shuhua Li, Zichen Wang, Yiwei Wang, Feng Wei, Cairong Zhao
Computing Technology, Computer Technology
Ruihao Xi, Xuekuan Wang, Yongcheng Li, Shuhua Li, Zichen Wang, Yiwei Wang, Feng Wei, Cairong Zhao. Toward Rich Video Human-Motion2D Generation [EB/OL]. (2025-06-17) [2025-06-29]. https://arxiv.org/abs/2506.14428.