
VLC Fusion: Vision-Language Conditioned Sensor Fusion for Robust Object Detection


Source: arXiv
Abstract

Although fusing multiple sensor modalities can enhance object detection performance, existing fusion approaches often overlook subtle variations in environmental conditions and sensor inputs. As a result, they struggle to adaptively weight each modality under such variations. To address this challenge, we introduce Vision-Language Conditioned Fusion (VLC Fusion), a novel fusion framework that leverages a Vision-Language Model (VLM) to condition the fusion process on nuanced environmental cues. By capturing high-level environmental context such as darkness, rain, and camera blurring, the VLM guides the model to dynamically adjust modality weights based on the current scene. We evaluate VLC Fusion on real-world autonomous driving and military target detection datasets that include image, LiDAR, and mid-wave infrared modalities. Our experiments show that VLC Fusion consistently outperforms conventional fusion baselines, achieving improved detection accuracy in both seen and unseen scenarios.
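The abstract's core mechanism, mapping high-level environmental cues (e.g., darkness, rain, blur) to per-modality fusion weights, can be sketched as below. This is a minimal illustrative toy, not the paper's actual architecture: the `ConditionedFusion` class, the cue vector, and the single linear projection are all assumptions made for clarity.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class ConditionedFusion:
    """Toy sketch of cue-conditioned fusion: a vector of environment-cue
    scores (hypothetically produced by a VLM) is projected to one logit
    per modality; softmax turns the logits into fusion weights."""

    def __init__(self, num_cues, num_modalities, seed=0):
        rng = np.random.default_rng(seed)
        # Hypothetical learned projection from cue scores to modality logits.
        self.W = rng.normal(scale=0.1, size=(num_modalities, num_cues))

    def weights(self, cues):
        """Per-modality weights conditioned on the cue vector; sums to 1."""
        return softmax(self.W @ np.asarray(cues, dtype=float))

    def fuse(self, cues, features):
        """Weighted sum of equally sized per-modality feature vectors."""
        w = self.weights(cues)
        return sum(wi * f for wi, f in zip(w, features))

# Example: two modalities (camera, LiDAR), three cues (darkness, rain, blur).
fusion = ConditionedFusion(num_cues=3, num_modalities=2)
dark_scene = [1.0, 0.0, 0.0]          # hypothetical VLM cue scores
w = fusion.weights(dark_scene)
fused = fusion.fuse(dark_scene, [np.ones(4), np.zeros(4)])
```

In the paper the conditioning signal comes from a VLM describing the scene; here a fixed cue vector stands in for it, and the projection weights would be learned jointly with the detector rather than randomly initialized.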

Aditya Taparia, Noel Ngu, Mario Leiva, Joshua Shay Kricheli, John Corcoran, Nathaniel D. Bastian, Gerardo Simari, Paulo Shakarian, Ransalu Senanayake

Subjects: military technology; automation technology and automation equipment

Aditya Taparia, Noel Ngu, Mario Leiva, Joshua Shay Kricheli, John Corcoran, Nathaniel D. Bastian, Gerardo Simari, Paulo Shakarian, Ransalu Senanayake. VLC Fusion: Vision-Language Conditioned Sensor Fusion for Robust Object Detection [EB/OL]. (2025-05-19) [2025-06-09]. https://arxiv.org/abs/2505.12715
