
LLM-enhanced Action-aware Multi-modal Prompt Tuning for Image-Text Matching

Source: arXiv
Abstract

Driven by large-scale contrastive vision-language pre-trained models such as CLIP, recent advances in image-text matching have achieved remarkable success in representation learning. However, because its pre-training relies on image-level vision-language alignment, CLIP falls short in understanding fine-grained details such as object attributes and the spatial relationships between objects. Recent efforts have attempted to compel CLIP to acquire structured visual representations by introducing prompt learning for object-level alignment. While achieving promising results, these methods still lack the capability to perceive actions, which are crucial for describing the states of objects or the relationships between them. We therefore propose to endow CLIP with fine-grained, action-level understanding through an LLM-enhanced action-aware multi-modal prompt-tuning method that incorporates action-related external knowledge generated by large language models (LLMs). Specifically, we design an action triplet prompt and an action state prompt to exploit the compositional semantic knowledge and state-related causal knowledge implicitly stored in LLMs. We then propose an adaptive interaction module that aggregates attentive visual features conditioned on the action-aware prompted knowledge to establish discriminative, action-aware visual representations, which further improves performance. Comprehensive experimental results on two benchmark datasets demonstrate the effectiveness of our method.
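
The following is a minimal, illustrative sketch (not the authors' implementation) of the adaptive interaction idea described in the abstract: embeddings of LLM-generated action triplet and action state prompts attend over CLIP patch features, and the resulting action context is adaptively fused with the global image feature. All class, parameter, and tensor names here are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of an adaptive interaction module conditioned on
# action-aware prompt embeddings; not the paper's released code.
import torch
import torch.nn as nn


class AdaptiveInteraction(nn.Module):
    """Aggregate CLIP patch features conditioned on action-aware prompt embeddings."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Cross-attention: action-aware prompt tokens query the image patch tokens.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Learned gate deciding how much action context to inject into the image feature.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, patch_feats: torch.Tensor, prompt_feats: torch.Tensor,
                image_feat: torch.Tensor) -> torch.Tensor:
        # patch_feats:  (B, P, D) CLIP patch tokens
        # prompt_feats: (B, K, D) encoded action triplet / action state prompts
        # image_feat:   (B, D)    global CLIP image feature
        attended, _ = self.cross_attn(prompt_feats, patch_feats, patch_feats)  # (B, K, D)
        action_ctx = attended.mean(dim=1)                                      # (B, D)
        # Adaptively fuse the global feature with the action-aware context.
        g = self.gate(torch.cat([image_feat, action_ctx], dim=-1))
        return image_feat + g * action_ctx


if __name__ == "__main__":
    B, P, K, D = 2, 196, 4, 512
    module = AdaptiveInteraction(dim=D)
    fused = module(torch.randn(B, P, D), torch.randn(B, K, D), torch.randn(B, D))
    # Cosine similarity between `fused` and a text embedding would then give the matching score.
    print(fused.shape)  # torch.Size([2, 512])
```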

Mengxiao Tian, Xinxiao Wu, Shuo Yang

Subjects: Computing Technology, Computer Technology

Mengxiao Tian, Xinxiao Wu, Shuo Yang. LLM-enhanced Action-aware Multi-modal Prompt Tuning for Image-Text Matching [EB/OL]. (2025-07-12) [2025-07-21]. https://arxiv.org/abs/2506.23502
