
VLM-RRT: Vision Language Model Guided RRT Search for Autonomous UAV Navigation


Source: arXiv
Abstract

Path planning is a fundamental capability of autonomous Unmanned Aerial Vehicles (UAVs), enabling them to efficiently navigate toward a target region or explore complex environments while avoiding obstacles. Traditional path-planning methods, such as Rapidly-exploring Random Trees (RRT), have proven effective but often encounter significant challenges, including high search-space complexity, suboptimal path quality, and slow convergence. These issues are particularly problematic in high-stakes applications like disaster response, where rapid and efficient planning is critical. To address these limitations and enhance path-planning efficiency, we propose Vision Language Model RRT (VLM-RRT), a hybrid approach that integrates the pattern-recognition capabilities of Vision Language Models (VLMs) with the path-planning strengths of RRT. By leveraging VLMs to provide initial directional guidance based on environmental snapshots, our method biases sampling toward regions more likely to contain feasible paths, significantly improving sampling efficiency and path quality. Extensive quantitative and qualitative experiments with various state-of-the-art VLMs demonstrate the effectiveness of the proposed approach.
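
The abstract describes biasing RRT's random sampling toward a direction suggested by a VLM. The paper itself gives no code here; the following Python sketch is only a rough illustration of that sampling-bias idea under stated assumptions. The function name biased_sample, the parameters bias_prob and suggested_heading_rad, and the cone width are all hypothetical, not the authors' implementation: the VLM is stubbed out as an external module that returns a heading angle.

    import math
    import random

    # Illustrative sketch (assumptions, not the authors' code): RRT sampling
    # biased toward a heading suggested by a VLM that inspected an
    # environment snapshot. `suggested_heading_rad` stands in for the VLM's
    # output (e.g. "north-east" mapped to pi/4 radians).
    def biased_sample(bounds, root, suggested_heading_rad,
                      bias_prob=0.7, step=5.0):
        """With probability `bias_prob`, sample inside a cone around the
        suggested heading from the tree root; otherwise fall back to a
        uniform sample, preserving RRT's probabilistic completeness."""
        (xmin, xmax), (ymin, ymax) = bounds
        if random.random() < bias_prob:
            # Perturb the suggested heading within a +/- 30 degree cone.
            theta = suggested_heading_rad + random.uniform(-math.pi / 6,
                                                           math.pi / 6)
            r = random.uniform(0.0, step * 10)
            x = min(max(root[0] + r * math.cos(theta), xmin), xmax)
            y = min(max(root[1] + r * math.sin(theta), ymin), ymax)
            return (x, y)
        return (random.uniform(xmin, xmax), random.uniform(ymin, ymax))

    # Example: bias samples toward the north-east from a root at the origin.
    sample = biased_sample(((0, 100), (0, 100)), root=(0.0, 0.0),
                           suggested_heading_rad=math.pi / 4)

Keeping a nonzero uniform-sampling probability is a standard design choice for biased RRT variants: it ensures the planner can still escape when the suggested direction turns out to be blocked.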

Jianlin Ye, Savvas Papaioannou, Panayiotis Kolios

DOI: 10.1109/ICUAS65942.2025.11007837

Subjects: Aerospace Science and Technology; Aviation

Jianlin Ye, Savvas Papaioannou, Panayiotis Kolios. VLM-RRT: Vision Language Model Guided RRT Search for Autonomous UAV Navigation [EB/OL]. (2025-05-29) [2025-07-01]. https://arxiv.org/abs/2505.23267.
