National Preprint Platform

General-Purpose Robotic Navigation via LVLM-Orchestrated Perception, Reasoning, and Acting

Source: arXiv

Abstract

Developing general-purpose navigation policies for unknown environments remains a core challenge in robotics. Most existing systems rely on task-specific neural networks and fixed data flows, limiting generalizability. Large Vision-Language Models (LVLMs) offer a promising alternative by embedding human-like knowledge suitable for reasoning and planning. Yet, prior LVLM-robot integrations typically depend on pre-mapped spaces, hard-coded representations, and myopic exploration. We introduce the Agentic Robotic Navigation Architecture (ARNA), a general-purpose navigation framework that equips an LVLM-based agent with a library of perception, reasoning, and navigation tools available within modern robotic stacks. At runtime, the agent autonomously defines and executes task-specific workflows that iteratively query the robotic modules, reason over multimodal inputs, and select appropriate navigation actions. This approach enables robust navigation and reasoning in previously unmapped environments, providing a new perspective on robotic stack design. Evaluated in Habitat Lab on the HM-EQA benchmark, ARNA achieves state-of-the-art performance, demonstrating effective exploration, navigation, and embodied question answering without relying on handcrafted plans, fixed input representations, or pre-existing maps.
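The agentic loop the abstract describes — iteratively querying robotic modules, reasoning over multimodal inputs, and selecting navigation actions — can be sketched as follows. This is a minimal illustrative sketch, not ARNA's published interface: the class names, tool registry, and `llm_policy` callback (standing in for the LVLM's tool-selection step) are all assumptions.

```python
# Illustrative sketch of an LVLM-orchestrated perceive-reason-act loop.
# The "LVLM" is abstracted as a policy callable that inspects the running
# context and chooses the next tool call; real systems would prompt a model.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ToolCall:
    """One tool invocation chosen by the policy (hypothetical schema)."""
    name: str
    args: dict


class AgenticNavigator:
    """Minimal agent: pick a tool, invoke it, fold the result into context."""

    def __init__(self, tools: dict[str, Callable[..., str]],
                 llm_policy: Callable[[list[str]], ToolCall]):
        self.tools = tools            # perception / reasoning / navigation tools
        self.llm_policy = llm_policy  # stand-in for the LVLM's reasoning step
        self.context: list[str] = []  # accumulated observation history

    def step(self) -> str:
        call = self.llm_policy(self.context)               # reason over history
        observation = self.tools[call.name](**call.args)   # perceive or act
        self.context.append(f"{call.name} -> {observation}")
        return observation

    def run(self, max_steps: int = 10) -> list[str]:
        for _ in range(max_steps):
            if self.step() == "DONE":  # policy signals task completion
                break
        return self.context
```

A scripted policy suffices to exercise the loop: register `{"look": ..., "move": ...}` tools and have the policy emit a fixed sequence of `ToolCall`s; the agent threads each observation back into its context exactly as the runtime workflow in the abstract does.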

Bernard Lange, Anil Yildiz, Mansur Arief, Shehryar Khattak, Mykel Kochenderfer, Georgios Georgakis

Subject: Automation technology; automation equipment

Bernard Lange, Anil Yildiz, Mansur Arief, Shehryar Khattak, Mykel Kochenderfer, Georgios Georgakis. General-Purpose Robotic Navigation via LVLM-Orchestrated Perception, Reasoning, and Acting [EB/OL]. (2025-06-20) [2025-07-23]. https://arxiv.org/abs/2506.17462.
