Learning Video Generation for Robotic Manipulation with Collaborative Trajectory Control
Recent advances in video diffusion models have demonstrated strong potential for generating robotic decision-making data, with trajectory conditions further enabling fine-grained control. However, existing trajectory-based methods primarily focus on individual object motion and struggle to capture the multi-object interactions that are crucial in complex robotic manipulation. This limitation arises from multi-feature entanglement in overlapping regions, which degrades visual fidelity. To address this, we present RoboMaster, a novel framework that models inter-object dynamics through a collaborative trajectory formulation. Unlike prior methods that decompose objects, our core idea is to decompose the interaction process into three sub-stages: pre-interaction, interaction, and post-interaction. Each stage is modeled using the feature of its dominant object, namely the robotic arm in the pre- and post-interaction phases and the manipulated object during interaction, thereby mitigating the multi-object feature fusion that degrades the interaction phase in prior work. To further ensure subject semantic consistency throughout the video, we incorporate appearance- and shape-aware latent representations for the objects. Extensive experiments on the challenging Bridge V2 dataset, as well as in-the-wild evaluations, demonstrate that our method outperforms existing approaches, establishing new state-of-the-art performance in trajectory-controlled video generation for robotic manipulation.
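The stage decomposition described in the abstract amounts to a per-frame selection of the dominant object's feature. The following minimal Python/PyTorch sketch illustrates that idea under stated assumptions: the function name compose_stage_conditioning, the per-frame latent tensors, and the grasp_frame/release_frame stage boundaries are hypothetical illustrations, not the paper's actual implementation.

    import torch

    def compose_stage_conditioning(
        arm_latent: torch.Tensor,   # (T, C) per-frame latent of the robotic arm
        obj_latent: torch.Tensor,   # (T, C) per-frame latent of the manipulated object
        grasp_frame: int,           # hypothetical: first frame of the interaction stage
        release_frame: int,         # hypothetical: first frame after interaction ends
    ) -> torch.Tensor:
        """Build a per-frame conditioning sequence by selecting the dominant
        object's latent in each sub-stage: the arm before and after the
        interaction, the manipulated object during the interaction."""
        T = arm_latent.shape[0]
        frames = torch.arange(T)
        # Boolean mask: True inside the interaction stage [grasp_frame, release_frame)
        in_interaction = (frames >= grasp_frame) & (frames < release_frame)
        # Per frame, take obj_latent where interacting and arm_latent otherwise
        return torch.where(in_interaction.unsqueeze(-1), obj_latent, arm_latent)

    # Usage: a 16-frame clip with 64-dim latents; interaction spans frames 5-11
    arm = torch.randn(16, 64)
    obj = torch.randn(16, 64)
    cond = compose_stage_conditioning(arm, obj, grasp_frame=5, release_frame=12)
    print(cond.shape)  # torch.Size([16, 64])

The per-frame hard selection keeps only one object's feature active in any overlapping region, which is one way to avoid the multi-object feature entanglement the abstract identifies as the cause of degraded visual fidelity.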
Xiao Fu, Xintao Wang, Xian Liu, Jianhong Bai, Runsen Xu, Pengfei Wan, Di Zhang, Dahua Lin
Subjects: Computing technology, computer technology; automation technology, automation equipment
Xiao Fu, Xintao Wang, Xian Liu, Jianhong Bai, Runsen Xu, Pengfei Wan, Di Zhang, Dahua Lin. Learning Video Generation for Robotic Manipulation with Collaborative Trajectory Control [EB/OL]. (2025-07-04) [2025-07-16]. https://arxiv.org/abs/2506.01943