Scalable Motion In-betweening via Diffusion and Physics-Based Character Adaptation
We propose a two-stage framework for motion in-betweening that combines diffusion-based motion generation with physics-based character adaptation. In Stage 1, a character-agnostic diffusion model synthesizes transitions from sparse keyframes on a canonical skeleton, allowing the same model to generalize across diverse characters. In Stage 2, a reinforcement learning-based controller adapts the canonical motion to the target character's morphology and dynamics, correcting artifacts and enhancing stylistic realism. This design supports scalable motion generation across characters with diverse skeletons without retraining the entire model. Experiments on standard benchmarks and stylized characters demonstrate that our method produces physically plausible, style-consistent motions under sparse and long-range constraints.
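To make the two-stage design concrete, here is a hypothetical minimal sketch of the pipeline's control flow. All function names and logic are illustrative assumptions, not the paper's implementation: Stage 1 stands in for the character-agnostic diffusion sampler with plain linear interpolation on a canonical skeleton, and Stage 2 stands in for the RL-based physics controller with a simple per-joint morphology rescaling.

```python
# Hypothetical sketch of the two-stage in-betweening pipeline.
# Stage 1: fill transitions between sparse keyframes on a canonical skeleton
#   (stand-in for the diffusion sampler: plain linear interpolation here).
# Stage 2: adapt the canonical motion to a target character's morphology
#   (stand-in for the RL controller: a per-joint bone-length rescale here).
from typing import List

Frame = List[float]  # one value per joint channel

def stage1_inbetween(keyframes: List[Frame], steps_between: int) -> List[Frame]:
    """Synthesize a transition between consecutive sparse keyframes."""
    motion: List[Frame] = []
    for a, b in zip(keyframes, keyframes[1:]):
        for s in range(steps_between):
            t = s / steps_between
            motion.append([(1 - t) * x + t * y for x, y in zip(a, b)])
    motion.append(list(keyframes[-1]))  # include the final keyframe
    return motion

def stage2_adapt(canonical: List[Frame], bone_scale: List[float]) -> List[Frame]:
    """Retarget canonical motion to a character with different proportions."""
    return [[x * s for x, s in zip(frame, bone_scale)] for frame in canonical]

# Usage: two keyframes, four in-between steps, a character with longer limbs.
keys = [[0.0, 0.0], [1.0, 2.0]]
canonical = stage1_inbetween(keys, steps_between=4)
adapted = stage2_adapt(canonical, bone_scale=[1.0, 1.5])
```

Because Stage 2 consumes only the canonical motion, the same Stage 1 output can be adapted to many characters, which is the scalability property the abstract claims.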
Jia Qin
Subject: Computing Technology, Computer Technology
Jia Qin. Scalable Motion In-betweening via Diffusion and Physics-Based Character Adaptation [EB/OL]. (2025-04-12) [2025-05-09]. https://arxiv.org/abs/2504.09413.