
DriveBLIP2: Attention-Guided Explanation Generation for Complex Driving Scenarios

Source: arXiv
Abstract

This paper introduces a new framework, DriveBLIP2, built upon the BLIP2-OPT architecture, to generate accurate and contextually relevant explanations for emerging driving scenarios. While existing vision-language models perform well in general tasks, they encounter difficulties in understanding complex, multi-object environments, particularly in real-time applications such as autonomous driving, where the rapid identification of key objects is crucial. To address this limitation, an Attention Map Generator is proposed to highlight significant objects relevant to driving decisions within critical video frames. By directing the model's focus to these key regions, the generated attention map helps produce clear and relevant explanations, enabling drivers to better understand the vehicle's decision-making process in critical situations. Evaluations on the DRAMA dataset reveal significant improvements in explanation quality, as indicated by higher BLEU, ROUGE, CIDEr, and SPICE scores compared to baseline models. These findings underscore the potential of targeted attention mechanisms in vision-language models for enhancing explainability in real-time autonomous driving.
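The abstract describes the mechanism only at a high level. Below is a minimal, hypothetical PyTorch sketch of how an attention map generator might re-weight frozen vision-encoder patch features before they are passed to BLIP-2's Q-Former. The module names, the feature dimension (1408, as used by the ViT-g encoder in BLIP-2), and the residual gating scheme are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class AttentionMapGenerator(nn.Module):
    """Hypothetical module: scores each visual patch to form a spatial attention map.

    The abstract does not specify DriveBLIP2's architecture; this assumes a
    lightweight MLP head over frozen vision-encoder patch features.
    """

    def __init__(self, feat_dim: int = 1408):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4),
            nn.GELU(),
            nn.Linear(feat_dim // 4, 1),
        )

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (batch, num_patches, feat_dim) from a frozen vision encoder
        scores = self.scorer(patch_feats).squeeze(-1)   # (batch, num_patches)
        return torch.softmax(scores, dim=-1)            # normalized attention map


def apply_attention_guidance(patch_feats: torch.Tensor,
                             attn_map: torch.Tensor,
                             strength: float = 1.0) -> torch.Tensor:
    """Re-weight patch features with the attention map before the Q-Former.

    A residual blend keeps all patches visible while emphasizing key regions;
    the blending scheme is an assumption for illustration.
    """
    num_patches = patch_feats.size(1)
    gate = 1.0 + strength * (attn_map.unsqueeze(-1) * num_patches - 1.0)
    return patch_feats * gate


if __name__ == "__main__":
    feats = torch.randn(2, 257, 1408)            # e.g. ViT-g patch tokens
    generator = AttentionMapGenerator(feat_dim=1408)
    attn_map = generator(feats)
    guided = apply_attention_guidance(feats, attn_map)
    print(attn_map.shape, guided.shape)          # (2, 257) and (2, 257, 1408)
```

Under this sketch, the guided patch features would replace the unweighted ones as input to the frozen Q-Former and OPT language model, so that explanation generation is conditioned on the highlighted regions.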

Shihong Ling, Yue Wan, Xiaowei Jia, Na Du

Subject: Transportation Economics

Shihong Ling, Yue Wan, Xiaowei Jia, Na Du. DriveBLIP2: Attention-Guided Explanation Generation for Complex Driving Scenarios [EB/OL]. (2025-06-25) [2025-07-16]. https://arxiv.org/abs/2506.22494.
