Unified Vision-Language-Action Model
Vision-language-action models (VLAs) have garnered significant attention for their potential in advancing robotic manipulation. However, previous approaches predominantly rely on the general comprehension capabilities of vision-language models (VLMs) to generate action signals, often overlooking the rich temporal and causal structure embedded in visual observations. In this paper, we present UniVLA, a unified and native multimodal VLA model that autoregressively models vision, language, and action signals as discrete token sequences. This formulation enables flexible multimodal task learning, particularly from large-scale video data. By incorporating world modeling during post-training, UniVLA captures causal dynamics from videos, facilitating effective transfer to downstream policy learning, especially for long-horizon tasks. Our approach sets new state-of-the-art results across several widely used simulation benchmarks, including CALVIN, LIBERO, and SimplerEnv-Bridge, significantly surpassing previous methods. For example, UniVLA achieves a 95.5% average success rate on the LIBERO benchmark, surpassing pi0-FAST's 85.5%. We further demonstrate its broad applicability on real-world ALOHA manipulation and autonomous driving.
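The core formulation described in the abstract, treating vision, language, and action as discrete tokens in one shared vocabulary and training a single autoregressive model over the interleaved sequence, can be illustrated with a minimal sketch. The code below is not the authors' implementation; the vocabulary sizes, layer counts, and token layout are illustrative assumptions, and a small PyTorch decoder-style transformer stands in for the full model.

```python
# Minimal sketch (assumed, not the UniVLA codebase): vision, language, and
# action tokens share one vocabulary and are modeled autoregressively.
import torch
import torch.nn as nn

VISION_VOCAB, TEXT_VOCAB, ACTION_VOCAB = 8192, 32000, 256   # assumed sizes
VOCAB = VISION_VOCAB + TEXT_VOCAB + ACTION_VOCAB            # shared token space

class UnifiedAutoregressiveVLA(nn.Module):
    def __init__(self, d_model=512, n_layers=6, n_heads=8, max_len=1024):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):                    # tokens: (B, T) int64
        B, T = tokens.shape
        x = self.tok(tokens) + self.pos(torch.arange(T, device=tokens.device))
        # Causal mask so each position only attends to earlier tokens.
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        h = self.blocks(x, mask=causal)
        return self.head(h)                       # next-token logits

# Training reduces to next-token prediction over interleaved
# [vision | language | action] token sequences (here random placeholders).
model = UnifiedAutoregressiveVLA()
seq = torch.randint(0, VOCAB, (2, 128))
logits = model(seq[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), seq[:, 1:].reshape(-1))
```

Under this formulation, world modeling during post-training amounts to predicting future vision tokens in the same sequence, and policy learning amounts to predicting action tokens, which is why the two transfer naturally to each other.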
Yuqi Wang, Xinghang Li, Wenxuan Wang, Junbo Zhang, Yingyan Li, Yuntao Chen, Xinlong Wang, Zhaoxiang Zhang
Computing Technology, Computer Technology
Yuqi Wang, Xinghang Li, Wenxuan Wang, Junbo Zhang, Yingyan Li, Yuntao Chen, Xinlong Wang, Zhaoxiang Zhang. Unified Vision-Language-Action Model [EB/OL]. (2025-06-24) [2025-07-16]. https://arxiv.org/abs/2506.19850.