ProphetDWM: A Driving World Model for Rolling Out Future Actions and Videos
Real-world driving requires observing the current environment, anticipating the future, and making appropriate driving decisions. This requirement aligns well with the capabilities of world models, which understand the environment and predict the future. However, recent world models in autonomous driving are built explicitly: they predict the future through controllable driving video generation. We argue that driving world models should have two additional abilities: action control and action prediction. From this perspective, previous methods are limited because predicting a video requires a given action sequence of the same length as the video, and they ignore the dynamical laws of actions. To address these issues, we propose ProphetDWM, a novel end-to-end driving world model that jointly predicts future videos and actions. It consists of an action module that learns latent actions spanning the present to the future from the given action sequence and observations, and a diffusion-model-based transition module that learns the state distribution. The model is trained jointly by learning latent actions from finite states and predicting future actions and video. This joint learning connects action dynamics with states and enables long-term future prediction. We evaluate our method on video generation and action prediction tasks on the nuScenes dataset. Compared with state-of-the-art methods, ProphetDWM achieves the best video consistency and the best action prediction accuracy, while also enabling high-quality long-term video and action generation.
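The abstract outlines a two-part architecture: an action module that rolls latent actions forward in time, and a diffusion-based transition module that predicts future states conditioned on those actions. Below is a minimal PyTorch sketch of that structure; since no implementation details are given here, every name, tensor shape, and design choice (ActionModule, TransitionModule, obs_dim, the GRU/MLP components, etc.) is an illustrative assumption, not the authors' actual code.

```python
# Hypothetical sketch of the two-module design described in the abstract.
# All names, dimensions, and architectural choices are assumptions.
import torch
import torch.nn as nn


class ActionModule(nn.Module):
    """Learns a latent action state from past observations/actions,
    then autoregressively rolls out future actions."""

    def __init__(self, obs_dim=256, act_dim=3, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(obs_dim + act_dim, hidden, batch_first=True)
        self.cell = nn.GRUCell(act_dim, hidden)   # rollout dynamics
        self.head = nn.Linear(hidden, act_dim)    # latent -> action

    def forward(self, obs_feats, past_actions, horizon):
        # obs_feats: (B, T, obs_dim); past_actions: (B, T, act_dim)
        x = torch.cat([obs_feats, past_actions], dim=-1)
        _, h = self.encoder(x)                     # summarize the past
        h, a = h[-1], past_actions[:, -1]
        preds = []
        for _ in range(horizon):                   # roll actions forward
            h = self.cell(a, h)
            a = self.head(h)
            preds.append(a)
        return torch.stack(preds, dim=1)           # (B, horizon, act_dim)


class TransitionModule(nn.Module):
    """Diffusion-style denoiser over future frame latents, conditioned
    on the predicted actions (the state-distribution learner)."""

    def __init__(self, frame_dim=512, act_dim=3, hidden=512):
        super().__init__()
        self.denoiser = nn.Sequential(
            nn.Linear(frame_dim + act_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, frame_dim),
        )

    def forward(self, noisy_frames, actions, t):
        # noisy_frames: (B, H, frame_dim); actions: (B, H, act_dim)
        # t: (B, 1) diffusion timestep, broadcast across the horizon H
        t = t[:, None, :].expand(-1, noisy_frames.size(1), -1)
        return self.denoiser(torch.cat([noisy_frames, actions, t], dim=-1))


if __name__ == "__main__":
    B, T, H = 2, 4, 6
    act_mod, trans_mod = ActionModule(), TransitionModule()
    obs = torch.randn(B, T, 256)
    past_a = torch.randn(B, T, 3)
    future_a = act_mod(obs, past_a, horizon=H)      # predicted actions
    noisy = torch.randn(B, H, 512)
    t = torch.rand(B, 1)
    eps_hat = trans_mod(noisy, future_a, t)         # predicted noise
    print(future_a.shape, eps_hat.shape)            # (2,6,3) (2,6,512)
```

A joint objective would combine an action-prediction loss with the diffusion denoising loss, consistent with the abstract's claim that joint training couples action dynamics with states; any specific loss weighting or noise schedule would likewise be an assumption.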
Xiaodong Wang, Peixi Peng
Subjects: Automation Technology and Equipment; Computing and Computer Technology
Xiaodong Wang, Peixi Peng. ProphetDWM: A Driving World Model for Rolling Out Future Actions and Videos [EB/OL]. (2025-05-24) [2025-06-06]. https://arxiv.org/abs/2505.18650.