
Dual-Granularity Cross-Modal Identity Association for Weakly-Supervised Text-to-Person Image Matching

Source: arXiv

Abstract

Weakly supervised text-to-person image matching, as a crucial approach to reducing models' reliance on large-scale manually labeled samples, holds significant research value. However, existing methods struggle to predict complex one-to-many identity relationships, severely limiting performance improvements. To address this challenge, we propose a local-and-global dual-granularity identity association mechanism. Specifically, at the local level, we explicitly establish cross-modal identity relationships within a batch, reinforcing identity constraints across different modalities and enabling the model to better capture subtle differences and correlations. At the global level, we construct a dynamic cross-modal identity association network with the visual modality as the anchor and introduce a confidence-based dynamic adjustment mechanism, effectively enhancing the model's ability to identify weakly associated samples while improving overall sensitivity. Additionally, we propose an information-asymmetric sample pair construction method combined with consistency learning to tackle hard sample mining and enhance model robustness. Experimental results demonstrate that the proposed method substantially boosts cross-modal matching accuracy, providing an efficient and practical solution for text-to-person image matching.
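To make the local-level idea concrete, the sketch below illustrates one plausible form of a batch-level cross-modal identity association loss. It is a hypothetical illustration only, not the paper's actual formulation: it assumes L2-normalized image and text embeddings plus (pseudo-)identity labels per sample, and treats all same-identity cross-modal pairs in the batch as positives, which captures the one-to-many identity relationships the abstract describes.

```python
# Hypothetical sketch of a batch-level cross-modal identity constraint.
# All names and the loss form are illustrative assumptions; the paper's
# actual mechanism may differ.
import numpy as np

def identity_association_loss(img_emb, txt_emb, img_ids, txt_ids, tau=0.07):
    """Softmax over image->text similarities; for each image, penalize the
    negative log of the probability mass assigned to ALL texts sharing its
    identity (one-to-many positives, not just a single matched caption)."""
    sim = img_emb @ txt_emb.T / tau                     # (B_img, B_txt) logits
    sim = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    prob = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    pos = img_ids[:, None] == txt_ids[None, :]          # same-identity mask
    pos_mass = (prob * pos).sum(axis=1)                 # mass on positives
    return float(-np.log(pos_mass + 1e-12).mean())

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Toy batch: 4 images, 6 captions; identity 0 and 2 each have two captions.
rng = np.random.default_rng(0)
imgs = l2norm(rng.normal(size=(4, 8)))
txts = l2norm(rng.normal(size=(6, 8)))
loss = identity_association_loss(
    imgs, txts,
    img_ids=np.array([0, 1, 2, 3]),
    txt_ids=np.array([0, 0, 1, 2, 2, 3]),
)
```

Under this kind of objective, minimizing the loss pulls every caption of an identity toward that identity's image embedding, which is one way to realize the "reinforcing identity constraints across different modalities" described above.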

Yafei Zhang, Yongle Shang, Huafeng Li

Subject: Computing Technology; Computer Technology

Yafei Zhang, Yongle Shang, Huafeng Li. Dual-Granularity Cross-Modal Identity Association for Weakly-Supervised Text-to-Person Image Matching [EB/OL]. (2025-07-09) [2025-07-16]. https://arxiv.org/abs/2507.06744.
