
Fine Grain Classification: Connecting Meta using Cross-Contrastive pre-training
Source: arXiv

Abstract

Fine-grained visual classification aims to recognize objects belonging to multiple subordinate categories within a super-category. However, this remains a challenging problem, as appearance information alone is often insufficient to accurately differentiate between fine-grained visual categories. To address this, we propose a novel and unified framework that leverages meta-information to assist fine-grained identification. We tackle the joint learning of visual and meta-information through cross-contrastive pre-training. In the first stage, we employ three encoders for images, text, and meta-information, aligning their projected embeddings to achieve better representations. We then fine-tune the image and meta-information encoders for the classification task. Experiments on the NABirds dataset demonstrate that our framework effectively utilizes meta-information to enhance fine-grained recognition performance. With the addition of meta-information, our framework surpasses the current baseline on NABirds by 7.83%. Furthermore, it achieves an accuracy of 84.44% on the NABirds dataset, outperforming many existing state-of-the-art approaches that utilize meta-information.
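The abstract describes the first stage as aligning projected embeddings from three encoders (image, text, meta-information). The paper does not spell out the loss here, but a common choice for such cross-contrastive alignment is a CLIP-style symmetric InfoNCE applied to each pair of modalities and summed. The sketch below illustrates that assumed formulation on raw embedding lists; the function names and the pairwise-sum design are illustrative, not taken from the paper.

```python
import math

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of embeddings.

    Matching indices in `a` and `b` are treated as positive pairs;
    all other combinations in the batch serve as negatives.
    """
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    a = [normalize(v) for v in a]
    b = [normalize(v) for v in b]

    # Cosine-similarity logits scaled by temperature.
    logits = [[sum(x * y for x, y in zip(u, v)) / temperature for v in b]
              for u in a]

    def xent(rows):
        # Cross-entropy where row i's positive is column i.
        loss = 0.0
        for i, row in enumerate(rows):
            log_z = math.log(sum(math.exp(s) for s in row))
            loss += log_z - row[i]
        return loss / len(rows)

    cols = [list(c) for c in zip(*logits)]
    return 0.5 * (xent(logits) + xent(cols))

def cross_contrastive_loss(img_emb, txt_emb, meta_emb):
    """Assumed stage-one objective: align all three modality pairs."""
    return (info_nce(img_emb, txt_emb)
            + info_nce(img_emb, meta_emb)
            + info_nce(txt_emb, meta_emb))
```

In stage two, per the abstract, the text encoder would be dropped and the image and meta-information encoders fine-tuned with an ordinary classification loss.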

Sumit Mamtani、Yash Thesia

Subject: Computing Technology; Computer Technology

Sumit Mamtani, Yash Thesia. Fine Grain Classification: Connecting Meta using Cross-Contrastive pre-training [EB/OL]. (2025-04-28) [2025-06-06]. https://arxiv.org/abs/2504.20322.
