National Preprint Platform

LLaVA-RE: Binary Image-Text Relevancy Evaluation with Multimodal Large Language Model

Source: arXiv
Abstract

Multimodal generative AI usually involves generating image or text responses given inputs in another modality. Evaluating image-text relevancy is essential for measuring response quality or ranking candidate responses. In particular, binary relevancy evaluation, i.e., "Relevant" vs. "Not Relevant", is a fundamental problem. However, this is a challenging task, since texts come in diverse formats and the definition of relevancy varies across scenarios. We find that Multimodal Large Language Models (MLLMs) are an ideal choice for building such evaluators, as they can flexibly handle complex text formats and take in additional task information. In this paper, we present LLaVA-RE, a first attempt at binary image-text relevancy evaluation with an MLLM. It follows the LLaVA architecture and adopts detailed task instructions and multimodal in-context samples. In addition, we propose a novel binary relevancy dataset that covers various tasks. Experimental results validate the effectiveness of our framework.
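The evaluator design described above, a detailed task instruction plus multimodal in-context samples ahead of the query pair, can be illustrated with a minimal sketch. This is a hypothetical prompt format for intuition only, not the paper's actual template; the `<image>` placeholder, function names, and label-parsing rule are all assumptions.

```python
# Sketch of framing a binary image-text relevancy query to an MLLM evaluator:
# a task instruction, optional in-context (caption, label) examples, then the
# query caption. Images would be passed to the MLLM as separate inputs and are
# marked here with an assumed "<image>" placeholder token.

def build_relevancy_prompt(instruction, examples, caption):
    """Assemble a text prompt for binary relevancy evaluation.

    `examples` is a list of (caption, label) in-context samples, where each
    label is "Relevant" or "Not Relevant".
    """
    parts = [instruction]
    for ex_caption, label in examples:
        parts.append(f"<image>\nText: {ex_caption}\nAnswer: {label}")
    parts.append(f"<image>\nText: {caption}\nAnswer:")
    return "\n\n".join(parts)

def parse_relevancy(response):
    """Map a free-form model response onto the binary label space."""
    return "Not Relevant" if "not relevant" in response.lower() else "Relevant"

instruction = ("You are given an image and a text. Decide whether the text is "
               "relevant to the image. Answer 'Relevant' or 'Not Relevant'.")
prompt = build_relevancy_prompt(
    instruction,
    examples=[("A dog playing fetch in a park.", "Relevant")],
    caption="A recipe for chocolate cake.",
)
print(parse_relevancy("Not Relevant."))  # -> Not Relevant
```

Constraining the output space to two labels (and normalizing the model's free-form answer back onto it) is what makes such an evaluator usable for scoring or ranking candidate responses.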

Tao Sun, Oliver Liu, JinJin Li, Lan Ma

Computing technology; computer technology

Tao Sun, Oliver Liu, JinJin Li, Lan Ma. LLaVA-RE: Binary Image-Text Relevancy Evaluation with Multimodal Large Language Model [EB/OL]. (2025-08-07) [2025-08-18]. https://arxiv.org/abs/2508.05602.
