An analysis of vision-language models for fabric retrieval
Effective cross-modal retrieval is essential for applications such as information retrieval and recommendation systems, particularly in specialized domains like manufacturing, where product information often consists of visual samples paired with a textual description. This paper investigates the use of Vision-Language Models (VLMs) for zero-shot text-to-image retrieval on fabric samples. We address the lack of publicly available datasets by introducing an automated annotation pipeline that uses Multimodal Large Language Models (MLLMs) to generate two types of textual descriptions: free-form natural language and structured attribute-based descriptions. We use these descriptions to evaluate retrieval performance across three Vision-Language Models: CLIP, LAION-CLIP, and Meta's Perception Encoder. Our experiments demonstrate that structured, attribute-rich descriptions significantly improve retrieval accuracy, particularly for visually complex fabric classes, with the Perception Encoder outperforming the other models thanks to its robust feature-alignment capabilities. However, zero-shot retrieval remains challenging in this fine-grained domain, underscoring the need for domain-adapted approaches. Our findings highlight the importance of combining technical textual descriptions with advanced VLMs to optimize cross-modal retrieval in industrial applications.
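For context, the sketch below illustrates how zero-shot text-to-image retrieval with a CLIP-style model is typically performed: the textual query and the candidate fabric images are embedded into a shared space, and images are ranked by cosine similarity to the query. The model checkpoint, image file names, and query string are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Minimal zero-shot text-to-image retrieval sketch with a CLIP-style model.
# Checkpoint, file names, and query are placeholders, not the paper's setup.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

fabric_images = [Image.open(p) for p in ["fabric_01.jpg", "fabric_02.jpg"]]  # hypothetical files
query = "a plain-weave cotton fabric with a blue floral print"  # attribute-style description

inputs = processor(text=[query], images=fabric_images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Normalize the embeddings and rank images by cosine similarity to the text query.
image_embeds = outputs.image_embeds / outputs.image_embeds.norm(dim=-1, keepdim=True)
text_embeds = outputs.text_embeds / outputs.text_embeds.norm(dim=-1, keepdim=True)
similarity = text_embeds @ image_embeds.T          # shape: (1, num_images)
ranking = similarity.squeeze(0).argsort(descending=True)
print("Retrieval order:", ranking.tolist())
```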
Francesco Giuliari, Asif Khan Pattan, Mohamed Lamine Mekhalfi, Fabio Poiesi
Textile industry, dyeing and finishing; automation technology and equipment; computing and computer technology
Francesco Giuliari, Asif Khan Pattan, Mohamed Lamine Mekhalfi, Fabio Poiesi. An analysis of vision-language models for fabric retrieval [EB/OL]. (2025-07-07) [2025-07-16]. https://arxiv.org/abs/2507.04735.