
NavBench: Probing Multimodal Large Language Models for Embodied Navigation

Source: arXiv

English Abstract

Multimodal Large Language Models (MLLMs) have demonstrated strong generalization in vision-language tasks, yet their ability to understand and act within embodied environments remains underexplored. We present NavBench, a benchmark to evaluate the embodied navigation capabilities of MLLMs under zero-shot settings. NavBench consists of two components: (1) navigation comprehension, assessed through three cognitively grounded tasks including global instruction alignment, temporal progress estimation, and local observation-action reasoning, covering 3,200 question-answer pairs; and (2) step-by-step execution in 432 episodes across 72 indoor scenes, stratified by spatial, cognitive, and execution complexity. To support real-world deployment, we introduce a pipeline that converts MLLMs' outputs into robotic actions. We evaluate both proprietary and open-source models, finding that GPT-4o performs well across tasks, while lighter open-source models succeed in simpler cases. Results also show that models with higher comprehension scores tend to achieve better execution performance. Providing map-based context improves decision accuracy, especially in medium-difficulty scenarios. However, most models struggle with temporal understanding, particularly in estimating progress during navigation, which may pose a key challenge.

Yanyuan Qiao, Haodong Hong, Wenqi Lyu, Dong An, Siqi Zhang, Yutong Xie, Xinyu Wang, Qi Wu

Subject: Computing Technology, Computer Technology

Yanyuan Qiao, Haodong Hong, Wenqi Lyu, Dong An, Siqi Zhang, Yutong Xie, Xinyu Wang, Qi Wu. NavBench: Probing Multimodal Large Language Models for Embodied Navigation [EB/OL]. (2025-06-01) [2025-07-02]. https://arxiv.org/abs/2506.01031.