National Preprint Platform

CONCAP: Seeing Beyond English with Concepts Retrieval-Augmented Captioning


Source: arXiv
Abstract

Multilingual vision-language models have made significant strides in image captioning, yet they still lag behind their English counterparts due to limited multilingual training data and costly large-scale model parameterization. Retrieval-augmented generation (RAG) offers a promising alternative by conditioning caption generation on retrieved examples in the target language, reducing the need for extensive multilingual training. However, multilingual RAG captioning models often depend on retrieved captions translated from English, which can introduce mismatches and linguistic biases relative to the source language. We introduce CONCAP, a multilingual image captioning model that integrates retrieved captions with image-specific concepts, enhancing the contextualization of the input image and grounding the captioning process across different languages. Experiments on the XM3600 dataset indicate that CONCAP achieves strong performance on low- and mid-resource languages while substantially reducing data requirements. Our findings highlight the effectiveness of concept-aware retrieval augmentation in bridging multilingual performance gaps.
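The abstract describes conditioning the caption decoder on both retrieved target-language captions and image-specific concepts. A minimal sketch of how such a combined conditioning input might be assembled is shown below; the function name, prompt format, and field labels are illustrative assumptions, not the paper's actual input encoding.

```python
# Hypothetical sketch of concept-aware retrieval-augmented input assembly.
# The prompt layout and labels are assumptions for illustration only.

def build_concap_prompt(retrieved_captions, concepts, language):
    """Combine retrieved target-language captions with image-specific
    concepts into one conditioning string for a caption decoder."""
    concept_part = "Concepts: " + ", ".join(concepts)
    caption_part = "\n".join(
        f"Similar caption {i + 1}: {c}"
        for i, c in enumerate(retrieved_captions)
    )
    return (
        f"Generate a caption in {language}.\n"
        f"{concept_part}\n"
        f"{caption_part}\n"
        "Caption:"
    )

prompt = build_concap_prompt(
    retrieved_captions=[
        "Ein Hund rennt über eine Wiese.",
        "Ein brauner Hund im Gras.",
    ],
    concepts=["dog", "grass", "running"],
    language="German",
)
print(prompt)
```

Because the retrieved captions are already in the target language and the concepts ground the image content, the decoder need not rely on English-translated examples, which is the mismatch the paper aims to avoid.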

George Ibrahim, Rita Ramos, Yova Kementchedjhieva

Subject: Computing Technology; Computer Technology

George Ibrahim, Rita Ramos, Yova Kementchedjhieva. CONCAP: Seeing Beyond English with Concepts Retrieval-Augmented Captioning [EB/OL]. (2025-07-27) [2025-08-18]. https://arxiv.org/abs/2507.20411.
