National Preprint Platform

SOLVE: Synergy of Language-Vision and End-to-End Networks for Autonomous Driving


Source: arXiv
English Abstract

The integration of Vision-Language Models (VLMs) into autonomous driving systems has shown promise in addressing key challenges such as learning complexity, interpretability, and common-sense reasoning. However, existing approaches often struggle with efficient integration and real-time decision-making due to computational demands. In this paper, we introduce SOLVE, an innovative framework that synergizes VLMs with end-to-end (E2E) models to enhance autonomous vehicle planning. Our approach emphasizes knowledge sharing at the feature level through a shared visual encoder, enabling comprehensive interaction between VLM and E2E components. We propose a Trajectory Chain-of-Thought (T-CoT) paradigm, which progressively refines trajectory predictions, reducing uncertainty and improving accuracy. By employing a temporal decoupling strategy, SOLVE achieves efficient cooperation by aligning high-quality VLM outputs with E2E real-time performance. Evaluated on the nuScenes dataset, our method demonstrates significant improvements in trajectory prediction accuracy, paving the way for more robust and reliable autonomous driving systems.
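The abstract describes three ideas: a shared visual encoder feeding both the VLM and E2E branches, a Trajectory Chain-of-Thought (T-CoT) loop that progressively refines a coarse trajectory, and temporal decoupling between the slow VLM and the fast E2E planner. A minimal toy sketch of the first two ideas, in plain Python with made-up stand-in functions (`shared_visual_encoder`, `vlm_coarse_trajectory`, `tcot_refine` are all illustrative names, not the paper's actual implementation):

```python
def shared_visual_encoder(pixels):
    """Toy stand-in for the shared visual encoder: summarizes a flat
    list of pixel values into two scalar features (mean, variance)
    that both the VLM and E2E branches would consume."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return [mean, var]

def vlm_coarse_trajectory(features, horizon=6):
    """VLM branch (stand-in): proposes a coarse straight-line future
    trajectory of (x, y) waypoints, scaled by a feature value."""
    speed = 1.0 + features[0]
    return [(speed * t, 0.0) for t in range(1, horizon + 1)]

def tcot_refine(trajectory, features, steps=3):
    """T-CoT, sketched: each step applies a shrinking feature-dependent
    correction to every waypoint, mimicking progressive refinement
    that reduces uncertainty from coarse to fine predictions."""
    traj = list(trajectory)
    for s in range(1, steps + 1):
        delta = features[1] / (10.0 * s)  # smaller correction each step
        traj = [(x + delta, y + delta) for (x, y) in traj]
    return traj

pixels = [0.1, 0.5, 0.9, 0.3]
feats = shared_visual_encoder(pixels)      # shared by both branches
coarse = vlm_coarse_trajectory(feats)      # slow, high-quality proposal
refined = tcot_refine(coarse, feats)       # progressive refinement
print(len(refined))  # 6 waypoints
```

In the paper's temporal-decoupling scheme, the expensive VLM output would be computed asynchronously and aligned with the E2E planner's real-time loop; the sketch above ignores timing and only illustrates the coarse-to-fine data flow.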

Xuesong Chen, Linjiang Huang, Tao Ma, Rongyao Fang, Shaoshuai Shi, Hongsheng Li

Subject areas: automation technology and automation equipment; computing technology and computer technology

Xuesong Chen, Linjiang Huang, Tao Ma, Rongyao Fang, Shaoshuai Shi, Hongsheng Li. SOLVE: Synergy of Language-Vision and End-to-End Networks for Autonomous Driving [EB/OL]. (2025-05-22) [2025-07-16]. https://arxiv.org/abs/2505.16805.
