Multi-Person Interaction Generation from Two-Person Motion Priors
Generating realistic human motion with high-level controls is a crucial task for social understanding, robotics, and animation. With high-quality motion-capture (MoCap) data becoming increasingly available, a wide range of data-driven approaches have been presented. However, modelling multi-person interactions remains under-explored. In this paper, we present Graph-driven Interaction Sampling, a method that can generate realistic and diverse multi-person interactions by leveraging existing two-person motion diffusion models as motion priors. Instead of training a new model specific to multi-person interaction synthesis, our key insight is to spatially and temporally separate complex multi-person interactions into a graph structure of two-person interactions, which we name the Pairwise Interaction Graph. We thus decompose the generation task into simultaneous single-person motion generation, each conditioned on the motion of one other person. In addition, to reduce artifacts such as interpenetrations of body parts in generated multi-person interactions, we introduce two graph-dependent guidance terms into the diffusion sampling scheme. Unlike previous work, our method can produce varied high-quality multi-person interactions without repetitive individual motions. Extensive experiments demonstrate that our approach consistently outperforms existing methods in reducing artifacts when generating a wide range of two-person and multi-person interactions.
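The decomposition described above can be sketched as a small data structure. The following is an illustrative sketch only (not the authors' code): people are nodes, and each edge of the Pairwise Interaction Graph marks a two-person interaction to be handled by a two-person motion prior. The function name and the default complete-graph choice are assumptions made for illustration.

```python
# Hypothetical sketch of a Pairwise Interaction Graph: nodes are people,
# edges are two-person interactions delegated to a two-person motion prior.
from itertools import combinations

def pairwise_interaction_graph(num_people, interacting_pairs=None):
    """Return (nodes, edges) for a multi-person scene.

    Each edge (i, j) would condition person i's single-person generation
    on person j's motion (and vice versa) during diffusion sampling.
    """
    nodes = list(range(num_people))
    # Default: every pair interacts (complete graph); in practice the graph
    # would be sparser, reflecting who actually interacts with whom.
    if interacting_pairs is None:
        edges = list(combinations(nodes, 2))
    else:
        edges = list(interacting_pairs)
    return nodes, edges

# A three-person scene decomposes into three pairwise interactions.
nodes, edges = pairwise_interaction_graph(3)
```

In this view, guidance terms (e.g. interpenetration penalties) would be evaluated per edge during sampling, which is what makes the decomposition compatible with off-the-shelf two-person priors.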
Wenning Xu, Shiyu Fan, Paul Henderson, Edmond S. L. Ho
Computing Technology, Computer Technology
Wenning Xu, Shiyu Fan, Paul Henderson, Edmond S. L. Ho. Multi-Person Interaction Generation from Two-Person Motion Priors [EB/OL]. (2025-05-23) [2025-06-14]. https://arxiv.org/abs/2505.17860.