
CoMemo: LVLMs Need Image Context with Image Memory


Source: arXiv
Abstract

Recent advancements in Large Vision-Language Models built upon Large Language Models have established aligning visual features with LLM representations as the dominant paradigm. However, inherited LLM architectural designs introduce suboptimal characteristics for multimodal processing. First, LVLMs exhibit a bimodal distribution in attention allocation, leading to the progressive neglect of middle visual content as context expands. Second, conventional positional encoding schemes fail to preserve vital 2D structural relationships when processing dynamic high-resolution images. To address these limitations, we propose CoMemo, a dual-path architecture that combines a Context image path with an image Memory path for visual processing, effectively alleviating visual information neglect. Additionally, we introduce RoPE-DHR, a novel positional encoding mechanism that employs thumbnail-based positional aggregation to maintain 2D spatial awareness while mitigating remote decay in extended sequences. Evaluations across seven benchmarks, including long-context comprehension, multi-image reasoning, and visual question answering, demonstrate CoMemo's superior performance compared to conventional LVLM architectures. Project page is available at https://lalbj.github.io/projects/CoMemo/.
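The abstract only gives the intuition behind RoPE-DHR, so here is a minimal PyTorch sketch of what "thumbnail-based positional aggregation" could look like: each high-resolution tile token reuses the RoPE position id of the thumbnail token covering the same 2D region, so the positional span stays at thumbnail scale instead of growing with the number of tiles. The function name, the grid-based mapping, and the id layout are illustrative assumptions, not the paper's exact scheme.

```python
import torch

def thumbnail_aggregated_pos_ids(
    text_prefix_len: int,
    thumb_hw: tuple,  # thumbnail token grid, e.g. (16, 16)
    tile_hw: tuple,   # full high-resolution token grid, e.g. (64, 64)
) -> torch.Tensor:
    """Assign each high-res token the position id of the thumbnail token
    covering the same 2D region, keeping the effective RoPE span as short
    as the thumbnail rather than proportional to the tile count."""
    th, tw = thumb_hw
    H, W = tile_hw
    # Thumbnail tokens take consecutive ids right after the text prefix.
    thumb_ids = torch.arange(th * tw).reshape(th, tw) + text_prefix_len
    # Map every high-res token back to its covering thumbnail cell.
    rows = (torch.arange(H) * th) // H  # (H,) thumbnail row per hi-res row
    cols = (torch.arange(W) * tw) // W  # (W,) thumbnail col per hi-res col
    hires_ids = thumb_ids[rows][:, cols]  # (H, W) shared position ids
    # Sequence order: thumbnail tokens first, then the hi-res tile tokens.
    return torch.cat([thumb_ids.flatten(), hires_ids.flatten()])
```

Under this sketch, distant tile tokens share ids with their thumbnail cells, so relative distances stay small (mitigating RoPE's remote decay) while the thumbnail grid preserves row/column structure.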

Shi Liu, Weijie Su, Xizhou Zhu, Wenhai Wang, Jifeng Dai

Subject: Computing Technology, Computer Technology

Shi Liu, Weijie Su, Xizhou Zhu, Wenhai Wang, Jifeng Dai. CoMemo: LVLMs Need Image Context with Image Memory [EB/OL]. (2025-06-06) [2025-06-23]. https://arxiv.org/abs/2506.06279.
