
Neural Motion Simulator: Pushing the Limit of World Models in Reinforcement Learning

Source: arXiv
Abstract

An embodied system must not only model the patterns of the external world but also understand its own motion dynamics. A motion dynamics model is essential for efficient skill acquisition and effective planning. In this work, we introduce the neural motion simulator (MoSim), a world model that predicts the future physical state of an embodied system based on current observations and actions. MoSim achieves state-of-the-art performance in physical state prediction and provides competitive performance across a range of downstream tasks. This work shows that when a world model is accurate enough and performs precise long-horizon predictions, it can facilitate efficient skill acquisition in imagined worlds and even enable zero-shot reinforcement learning. Furthermore, MoSim can transform any model-free reinforcement learning (RL) algorithm into a model-based approach, effectively decoupling physical environment modeling from RL algorithm development. This separation allows for independent advancements in RL algorithms and world modeling, significantly improving sample efficiency and enhancing generalization capabilities. Our findings highlight that world models for motion dynamics are a promising direction for developing more versatile and capable embodied systems.
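To make the abstract's central idea concrete, the sketch below illustrates, under loose assumptions, the kind of interface such a motion-dynamics world model exposes: a `predict` function mapping (state, action) to the next state, and an imagined `rollout` that a model-free RL policy could train against instead of the real environment. All class and function names here are hypothetical placeholders, not the paper's actual implementation.

```python
import numpy as np


class WorldModel:
    """Hypothetical motion-dynamics world model interface (not MoSim itself).

    It predicts the next physical state from the current state and action,
    so a policy can be improved on imagined trajectories.
    """

    def __init__(self, state_dim: int, action_dim: int):
        self.state_dim = state_dim
        self.action_dim = action_dim
        # Placeholder linear dynamics; a trained neural network would go here.
        rng = np.random.default_rng(0)
        self.A = rng.normal(scale=0.1, size=(state_dim, state_dim))
        self.B = rng.normal(scale=0.1, size=(state_dim, action_dim))

    def predict(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        """Predict the next state given the current state and action."""
        return state + self.A @ state + self.B @ action

    def rollout(self, state: np.ndarray, policy, horizon: int) -> list:
        """Generate an imagined trajectory by repeatedly querying the model."""
        trajectory = [state]
        for _ in range(horizon):
            action = policy(trajectory[-1])
            trajectory.append(self.predict(trajectory[-1], action))
        return trajectory


if __name__ == "__main__":
    model = WorldModel(state_dim=4, action_dim=2)
    # Stand-in for any model-free RL policy; it only needs to map states to actions.
    random_policy = lambda s: np.zeros(2)
    imagined = model.rollout(np.zeros(4), random_policy, horizon=10)
    print(f"Imagined {len(imagined) - 1} transitions without touching the real environment.")
```

The point of the sketch is the decoupling described in the abstract: the policy only ever sees `predict`, so the RL algorithm and the world model can be developed and swapped independently.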

Chenjie Hao, Weyl Lu, Yifan Xu, Yubei Chen

Computing Technology, Computer Technology; Automation Technology, Automation Equipment

Chenjie Hao, Weyl Lu, Yifan Xu, Yubei Chen. Neural Motion Simulator: Pushing the Limit of World Models in Reinforcement Learning [EB/OL]. (2025-04-09) [2025-05-13]. https://arxiv.org/abs/2504.07095.
