InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning
Leveraging pretrained Vision-Language Models (VLMs) to map language instructions and visual observations to raw low-level actions, Vision-Language-Action models (VLAs) hold great promise for achieving general-purpose robotic systems. Despite their advancements, existing VLAs tend to spuriously correlate task-irrelevant visual features with actions, limiting their generalization capacity beyond the training data. To tackle this challenge, we propose Intrinsic Spatial Reasoning (InSpire), a simple yet effective approach that mitigates the adverse effects of spurious correlations by boosting the spatial reasoning ability of VLAs. Specifically, InSpire redirects the VLA's attention to task-relevant factors by prepending the question "In which direction is the [object] relative to the robot?" to the language instruction and aligning both the answer "right/left/up/down/front/back/grasped" and the predicted actions with the ground truth. Notably, InSpire can be used as a plugin to enhance existing autoregressive VLAs, requiring no extra training data or interaction with other large models. Extensive experimental results in both simulation and real-world environments demonstrate the effectiveness and flexibility of our approach. Our code, pretrained models and demos are publicly available at: https://Koorye.github.io/proj/Inspire.
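The abstract only specifies the question template and the answer set, so the following is a minimal illustrative sketch of how such a setup could be wired, not the authors' implementation: the prompt-building helper, the answer head shape, the choice of cross-entropy plus MSE, and the loss weighting are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

# Answer set taken from the abstract; everything else here is assumed.
DIRECTIONS = ["right", "left", "up", "down", "front", "back", "grasped"]


def build_inspire_prompt(instruction: str, target_object: str) -> str:
    """Prepend the intrinsic spatial-reasoning question to the task instruction."""
    question = f"In which direction is the {target_object} relative to the robot?"
    return f"{question} {instruction}"


def inspire_loss(direction_logits: torch.Tensor,
                 direction_label: torch.Tensor,
                 predicted_actions: torch.Tensor,
                 gt_actions: torch.Tensor,
                 answer_weight: float = 1.0) -> torch.Tensor:
    """Combine a spatial-answer loss with a standard action-prediction loss.

    direction_logits: (B, len(DIRECTIONS)) scores for the spatial answer.
    direction_label:  (B,) index of the ground-truth direction.
    predicted_actions / gt_actions: (B, action_dim) continuous actions.
    """
    answer_loss = F.cross_entropy(direction_logits, direction_label)
    action_loss = F.mse_loss(predicted_actions, gt_actions)
    return action_loss + answer_weight * answer_loss


if __name__ == "__main__":
    # Example with dummy tensors standing in for a VLA's outputs.
    prompt = build_inspire_prompt("pick up the red block", "red block")
    print(prompt)
    batch, action_dim = 4, 7
    loss = inspire_loss(torch.randn(batch, len(DIRECTIONS)),
                        torch.randint(0, len(DIRECTIONS), (batch,)),
                        torch.randn(batch, action_dim),
                        torch.randn(batch, action_dim))
    print(loss.item())
```

In an actual autoregressive VLA the spatial answer would more likely be supervised as generated tokens rather than through a separate classification head; the sketch only conveys the idea of jointly aligning the answer and the actions with the ground truth.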
Ji Zhang, Shihan Wu, Xu Luo, Hao Wu, Lianli Gao, Heng Tao Shen, Jingkuan Song
Computing Technology; Computer Technology
Ji Zhang, Shihan Wu, Xu Luo, Hao Wu, Lianli Gao, Heng Tao Shen, Jingkuan Song. InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning [EB/OL]. (2025-05-19) [2025-06-04]. https://arxiv.org/abs/2505.13888.