LaDi-WM: A Latent Diffusion-based World Model for Predictive Manipulation
Predictive manipulation has recently gained considerable attention in the Embodied AI community due to its potential to improve robot policy performance by leveraging predicted states. However, generating accurate future visual states of robot-object interactions from world models remains a well-known challenge, particularly in achieving high-quality pixel-level representations. To this end, we propose LaDi-WM, a world model that predicts the latent space of future states using diffusion modeling. Specifically, LaDi-WM leverages a well-established latent space aligned with pre-trained Visual Foundation Models (VFMs), which comprises both geometric features (DINO-based) and semantic features (CLIP-based). We find that predicting the evolution of the latent space is easier to learn and more generalizable than directly predicting pixel-level images. Building on LaDi-WM, we design a diffusion policy that iteratively refines output actions by incorporating forecasted states, thereby generating more consistent and accurate results. Extensive experiments on both synthetic and real-world benchmarks demonstrate that LaDi-WM significantly enhances policy performance, improving success rates by 27.9% on the LIBERO-LONG benchmark and by 20% in real-world scenarios. Furthermore, our world model and policies achieve impressive generalizability in real-world experiments.
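The abstract describes a two-stage pipeline: a latent diffusion world model that forecasts future VFM-aligned latents, and a diffusion policy that refines its actions by conditioning on those forecasts. The sketch below is a minimal, hypothetical illustration of this loop only, not the authors' implementation; all module architectures, dimensions, and iteration counts are assumed placeholders.

```python
# Minimal sketch (not the authors' code) of the predictive-manipulation loop:
# a latent diffusion world model forecasts a future VFM-aligned latent, and a
# diffusion policy iteratively refines its action using that forecast.
# All names, dimensions, and iteration counts are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 512   # assumed size of the combined DINO + CLIP latent
ACTION_DIM = 7     # assumed end-effector action dimension


class LatentWorldModel(nn.Module):
    """Stand-in for LaDi-WM: denoises a noisy future latent conditioned on
    the current latent (the real model is a latent diffusion network)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * LATENT_DIM, 1024), nn.SiLU(),
            nn.Linear(1024, LATENT_DIM),
        )

    def predict_future(self, current_latent, steps=10):
        # Start from Gaussian noise and iteratively refine toward the
        # predicted future latent (diffusion schedule details omitted).
        future = torch.randn_like(current_latent)
        for _ in range(steps):
            future = self.net(torch.cat([future, current_latent], dim=-1))
        return future


class DiffusionPolicy(nn.Module):
    """Stand-in policy head: refines an action conditioned on the current
    and forecasted latents."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * LATENT_DIM + ACTION_DIM, 512), nn.SiLU(),
            nn.Linear(512, ACTION_DIM),
        )

    def refine(self, action, current_latent, future_latent):
        cond = torch.cat([action, current_latent, future_latent], dim=-1)
        return self.net(cond)


def predictive_manipulation_step(world_model, policy, current_latent,
                                 refine_iters=3):
    """One control step: forecast the future latent, then iteratively
    refine the action using that forecast."""
    action = torch.zeros(current_latent.shape[0], ACTION_DIM)
    for _ in range(refine_iters):
        future_latent = world_model.predict_future(current_latent)
        action = policy.refine(action, current_latent, future_latent)
    return action


if __name__ == "__main__":
    wm, pi = LatentWorldModel(), DiffusionPolicy()
    obs_latent = torch.randn(1, LATENT_DIM)  # would come from DINO/CLIP encoders
    print(predictive_manipulation_step(wm, pi, obs_latent).shape)  # (1, 7)
```

The key design point suggested by the abstract is that prediction happens in the VFM latent space rather than pixel space, so the world model and policy exchange compact feature vectors instead of rendered images.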
Yuhang Huang, Jiazhao Zhang, Shilong Zou, Xinwang Liu, Ruizhen Hu, Kai Xu
Computing Technology; Computer Technology
Yuhang Huang, Jiazhao Zhang, Shilong Zou, Xinwang Liu, Ruizhen Hu, Kai Xu. LaDi-WM: A Latent Diffusion-based World Model for Predictive Manipulation [EB/OL]. (2025-05-13) [2025-06-13]. https://arxiv.org/abs/2505.11528.