Can Pretrained Vision-Language Embeddings Alone Guide Robot Navigation?
Foundation models have revolutionized robotics by providing rich semantic representations without task-specific training. While many approaches integrate pretrained vision-language models (VLMs) with specialized navigation architectures, the fundamental question remains: can these pretrained embeddings alone successfully guide navigation without additional fine-tuning or specialized modules? We present a minimalist framework that decouples this question by training a behavior cloning policy directly on frozen vision-language embeddings from demonstrations collected by a privileged expert. Our approach achieves a 74% success rate in navigation to language-specified targets, compared to 100% for the state-aware expert, though requiring 3.2 times more steps on average. This performance gap reveals that pretrained embeddings effectively support basic language grounding but struggle with long-horizon planning and spatial reasoning. By providing this empirical baseline, we highlight both the capabilities and limitations of using foundation models as drop-in representations for embodied tasks, offering critical insights for robotics researchers facing practical design tradeoffs between system complexity and performance in resource-constrained scenarios. Our code is available at https://github.com/oadamharoon/text2nav
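To make the setup concrete, below is a minimal sketch of behavior cloning on frozen vision-language embeddings. The choice of VLM (CLIP-style image and text encoders), the embedding dimension, the MLP policy head, and the discrete action space are all assumptions for illustration; the paper's actual architecture and hyperparameters may differ.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a small policy head trained by behavior cloning on
# frozen VLM embeddings. Embedding size, action space, and network shape
# are illustrative assumptions, not details taken from the paper.

class BCPolicy(nn.Module):
    """MLP mapping concatenated (image, instruction) embeddings to action logits."""

    def __init__(self, embed_dim: int = 512, num_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, num_actions),  # e.g., forward / turn-left / turn-right / stop
        )

    def forward(self, img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
        # The VLM encoders are kept frozen; only this head receives gradients.
        return self.net(torch.cat([img_emb, txt_emb], dim=-1))


def bc_training_step(policy, optimizer, img_emb, txt_emb, expert_actions):
    """One behavior-cloning update: cross-entropy against the privileged expert's actions."""
    logits = policy(img_emb, txt_emb)
    loss = nn.functional.cross_entropy(logits, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this framing, the pretrained embeddings act as a drop-in state representation: demonstrations from the privileged expert supply the action labels, and no gradients flow into the vision-language model itself.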
Nitesh Subedi, Adam Haroon, Shreyan Ganguly, Samuel T. K. Tetteh, Prajwal Koirala, Cody Fleming, Soumik Sarkar
Subjects: Fundamental theory of automation; computing and computer technology
Nitesh Subedi, Adam Haroon, Shreyan Ganguly, Samuel T. K. Tetteh, Prajwal Koirala, Cody Fleming, Soumik Sarkar. Can Pretrained Vision-Language Embeddings Alone Guide Robot Navigation? [EB/OL]. (2025-06-17) [2025-07-16]. https://arxiv.org/abs/2506.14507