Weakly-supervised VLM-guided Partial Contrastive Learning for Visual Language Navigation
Visual Language Navigation (VLN) is a fundamental task in Embodied AI, concerning an agent's ability to navigate complex environments by following natural language instructions. Despite the progress of existing methods, they often face common challenges. First, they rely on pre-trained backbone models for visual perception, which struggle with the dynamic viewpoints encountered in VLN scenarios. Second, performance is limited when pre-trained LLMs or VLMs are used without fine-tuning, owing to the absence of VLN domain knowledge. Third, while fine-tuning LLMs and VLMs can improve results, it incurs substantially higher computational costs. To address these limitations, we propose Weakly-supervised Partial Contrastive Learning (WPCL), a method that enhances an agent's ability to identify objects from dynamic viewpoints in VLN scenarios by effectively integrating pre-trained VLM knowledge into the perception process, without requiring VLM fine-tuning. Our method improves the agent's ability to interpret and respond to environmental cues while remaining computationally efficient. Experimental results show that our method outperforms baseline methods on multiple benchmarks, validating its effectiveness, robustness, and generalizability.
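The abstract does not give implementation details, but the core idea it names, contrastively aligning an agent's trainable perception features with embeddings from a frozen pre-trained VLM, restricted by weak supervision to a subset of regions, can be illustrated with a short sketch. Everything below (the AgentPerception head, the region mask, and the InfoNCE-style objective) is an illustrative assumption for exposition, not the paper's actual WPCL implementation.

```python
# Minimal sketch of the idea described in the abstract: align an agent's
# visual features with embeddings from a frozen, pre-trained VLM via a
# contrastive objective restricted to weakly-labeled regions. Module names,
# shapes, and the masking scheme are illustrative assumptions, not the
# paper's actual method.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AgentPerception(nn.Module):
    """Trainable perception head; stands in for the agent's visual encoder."""
    def __init__(self, in_dim=512, out_dim=512):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                  nn.Linear(out_dim, out_dim))

    def forward(self, x):
        return self.proj(x)

def partial_info_nce(agent_feats, vlm_feats, region_mask, temperature=0.07):
    """
    InfoNCE-style loss computed only over the regions flagged by the
    weak-supervision mask -- the 'partial' part of this sketch.

    agent_feats: (B, R, D) trainable features for R image regions
    vlm_feats:   (B, R, D) frozen VLM features for the same regions
    region_mask: (B, R) bool, True where weak labels mark a region as reliable
    """
    a = F.normalize(agent_feats, dim=-1)
    v = F.normalize(vlm_feats, dim=-1)
    a = a[region_mask]                      # (N, D) selected regions only
    v = v[region_mask]
    logits = a @ v.t() / temperature        # (N, N) pairwise similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric loss: matched agent/VLM region pairs act as positives.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage: VLM features are produced under no_grad, so the pre-trained
# VLM is never fine-tuned and only the agent's perception head learns.
if __name__ == "__main__":
    B, R, D = 4, 9, 512
    model = AgentPerception(D, D)
    raw = torch.randn(B, R, D)              # stand-in for raw visual input
    with torch.no_grad():
        vlm = torch.randn(B, R, D)          # stand-in for frozen VLM embeddings
    mask = torch.rand(B, R) > 0.5           # stand-in for weak region labels
    loss = partial_info_nce(model(raw), vlm, mask)
    loss.backward()
    print(f"loss: {loss.item():.4f}")
```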
Ruoyu Wang, Tong Yu, Junda Wu, Yao Liu, Julian McAuley, Lina Yao
Computing Technology, Computer Technology
Ruoyu Wang, Tong Yu, Junda Wu, Yao Liu, Julian McAuley, Lina Yao. Weakly-supervised VLM-guided Partial Contrastive Learning for Visual Language Navigation [EB/OL]. (2025-06-18) [2025-07-03]. https://arxiv.org/abs/2506.15757.