IRASim: A Fine-Grained World Model for Robot Manipulation
World models allow autonomous agents to plan and explore by predicting the visual outcomes of different actions. However, for robot manipulation, existing methods struggle to accurately model fine-grained robot-object interaction in the visual space because they overlook the precise alignment between each action and its corresponding frame. In this paper, we present IRASim, a novel world model capable of generating videos with fine-grained robot-object interaction details, conditioned on historical observations and robot action trajectories. We train a diffusion transformer and introduce a novel frame-level action-conditioning module within each transformer block to explicitly model and strengthen action-frame alignment. Extensive experiments show that: (1) the quality of the videos generated by our method surpasses that of all baseline methods and scales effectively with increased model size and computation; (2) policy evaluations using IRASim correlate strongly with those using the ground-truth simulator, highlighting its potential to accelerate real-world policy evaluation; (3) test-time scaling through model-based planning with IRASim significantly enhances policy performance, improving the IoU metric on the Push-T benchmark from 0.637 to 0.961; (4) IRASim offers flexible action controllability, allowing virtual robotic arms in datasets to be controlled via a keyboard or VR controller.
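To make the abstract's central idea concrete, the sketch below shows one plausible way a frame-level action-conditioning module could be wired into a transformer block, using adaptive layer-norm style modulation so that the action taken at frame t modulates only the tokens of frame t. The class name, tensor shapes, and 7-DoF action dimension are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class FrameLevelActionConditioning(nn.Module):
    """Illustrative sketch (not the paper's code): per-frame adaptive
    layer-norm conditioning. Each frame's latent tokens are modulated by an
    embedding of the action taken at that frame, enforcing action-frame
    alignment inside every transformer block."""

    def __init__(self, hidden_dim: int, action_dim: int):
        super().__init__()
        self.action_mlp = nn.Sequential(
            nn.Linear(action_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, 2 * hidden_dim),  # per-frame scale and shift
        )
        self.norm = nn.LayerNorm(hidden_dim, elementwise_affine=False)

    def forward(self, frame_tokens: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (batch, frames, tokens_per_frame, hidden_dim)
        # actions:      (batch, frames, action_dim) -- one action per frame
        scale, shift = self.action_mlp(actions).chunk(2, dim=-1)
        # Broadcast each frame's modulation over that frame's spatial tokens.
        scale = scale.unsqueeze(2)
        shift = shift.unsqueeze(2)
        return self.norm(frame_tokens) * (1 + scale) + shift


if __name__ == "__main__":
    cond = FrameLevelActionConditioning(hidden_dim=256, action_dim=7)
    frames = torch.randn(2, 16, 64, 256)  # 2 videos, 16 frames, 64 tokens per frame
    acts = torch.randn(2, 16, 7)          # assumed 7-DoF action per frame
    print(cond(frames, acts).shape)       # torch.Size([2, 16, 64, 256])
```

The intent of such a design, as the abstract suggests, is that conditioning each frame's tokens only on that frame's action (rather than pooling the whole trajectory into a single vector) keeps the generated interaction details aligned with the commanded motion frame by frame.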
Fangqi Zhu, Hongtao Wu, Song Guo, Yuxiao Liu, Chilam Cheang, Tao Kong
Subjects: Computing Technology, Computer Technology; Automation Technology, Automation Equipment
Fangqi Zhu, Hongtao Wu, Song Guo, Yuxiao Liu, Chilam Cheang, Tao Kong. IRASim: A Fine-Grained World Model for Robot Manipulation [EB/OL]. (2025-07-29) [2025-08-11]. https://arxiv.org/abs/2406.14540.