
Investigating Redundancy in Multimodal Large Language Models with Multiple Vision Encoders

Source: arXiv
Abstract

Multimodal Large Language Models (MLLMs) increasingly adopt multiple vision encoders to capture diverse visual information, ranging from coarse semantics to fine-grained details. While this approach is intended to enhance visual understanding capability, we observe that the performance gains from adding encoders often diminish and can even lead to performance degradation, a phenomenon we term encoder redundancy. This paper presents a systematic investigation into this issue. Through comprehensive ablation studies on state-of-the-art multi-encoder MLLMs, we empirically demonstrate that significant redundancy exists. To quantify each encoder's unique contribution, we propose a principled metric: the Conditional Utilization Rate (CUR). Building on CUR, we introduce the Information Gap (IG) to capture the overall disparity in encoder utility within a model. Our experiments reveal that certain vision encoders contribute little, or even negatively, to the model's performance, confirming the prevalence of redundancy. These findings highlight critical inefficiencies in current multi-encoder designs and establish that our proposed metrics can serve as valuable diagnostic tools for developing more efficient and effective multimodal architectures.
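The abstract does not give formal definitions of CUR and IG, but its description suggests an ablation-based reading: an encoder's utilization can be estimated from the performance drop when that encoder alone is removed, and the gap metric from the spread of those values. The sketch below illustrates this interpretation; the formulas, encoder names, and scores are assumptions for illustration, not the paper's actual definitions or results.

```python
# Illustrative scores: accuracy of the full multi-encoder model and of
# ablated variants with one vision encoder removed at a time.
# All numbers here are made up for demonstration.
full_score = 0.72
ablated_scores = {        # score with the named encoder removed
    "encoder_A": 0.64,    # large drop: contributes unique information
    "encoder_B": 0.71,    # small drop: largely redundant
    "encoder_C": 0.73,    # removal slightly *helps*: negative contribution
}

def conditional_utilization_rate(full, ablated):
    """One plausible reading of CUR (assumed, not the paper's exact formula):
    the relative performance drop when a single encoder is removed while all
    others remain. Near-zero or negative values indicate redundancy."""
    return (full - ablated) / full

curs = {name: conditional_utilization_rate(full_score, s)
        for name, s in ablated_scores.items()}

# One plausible reading of IG (also assumed): the spread between the most
# and least useful encoders, summarizing how unevenly utility is distributed.
information_gap = max(curs.values()) - min(curs.values())

for name, cur in curs.items():
    print(f"{name}: CUR = {cur:+.3f}")
print(f"Information Gap = {information_gap:.3f}")
```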

Song Mao, Yang Chen, Pinglong Cai, Ding Wang, Guohang Yan, Zhi Yu, Botian Shi

Subject: Computing technology; computer technology

Song Mao, Yang Chen, Pinglong Cai, Ding Wang, Guohang Yan, Zhi Yu, Botian Shi. Investigating Redundancy in Multimodal Large Language Models with Multiple Vision Encoders [EB/OL]. (2025-07-04) [2025-07-16]. https://arxiv.org/abs/2507.03262
