
Enhancing Features in Long-tailed Data Using Large Vision Model

Source: arXiv

Abstract

Language-based foundation models, such as large language models (LLMs) and large vision-language models (LVLMs), have been widely studied for long-tailed recognition. However, the linguistic data they require are not available in every practical task. In this study, we explore using large vision models (LVMs), or visual foundation models (VFMs), to enhance long-tailed data features without any language information. Specifically, we extract features from the LVM and fuse them with the baseline network's features in both the feature-map and latent spaces to obtain augmented features. Moreover, we design several prototype-based losses in the latent space to further exploit the potential of the augmented features. In the experimental section, we validate our approach on two benchmark datasets: ImageNet-LT and iNaturalist 2018.
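As a rough illustration of the fusion-plus-prototype idea described in the abstract (the paper itself provides no code here), the sketch below fuses features from a frozen vision backbone with a trainable baseline network and adds a simple prototype term. The choice of ViT-B/16 as the LVM, ResNet-50 as the baseline, concatenation followed by a linear projection as the fusion step, and learnable prototypes with a squared-distance loss are all assumptions for illustration, not the authors' exact design.

```python
# Minimal sketch of LVM-feature fusion with a prototype-based loss.
# NOT the authors' implementation; backbones, fusion, and loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50, vit_b_16


class FusedLongTailModel(nn.Module):
    def __init__(self, num_classes: int, latent_dim: int = 256):
        super().__init__()
        # Trainable baseline network and a frozen "large vision model".
        self.baseline = resnet50(weights=None)
        self.baseline.fc = nn.Identity()          # 2048-d baseline features
        self.lvm = vit_b_16(weights=None)
        self.lvm.heads = nn.Identity()            # 768-d LVM features
        for p in self.lvm.parameters():
            p.requires_grad = False
        # Fuse the two feature vectors into a shared latent space.
        self.project = nn.Linear(2048 + 768, latent_dim)
        self.classifier = nn.Linear(latent_dim, num_classes)
        # Hypothetical design choice: learnable class prototypes in latent space.
        self.prototypes = nn.Parameter(torch.randn(num_classes, latent_dim))

    def forward(self, x):
        f_base = self.baseline(x)                 # baseline features
        with torch.no_grad():
            f_lvm = self.lvm(x)                   # frozen LVM features
        z = self.project(torch.cat([f_base, f_lvm], dim=1))  # augmented latent feature
        return z, self.classifier(z)


def prototype_loss(z, labels, prototypes):
    """Pull each latent feature toward its class prototype (one simple variant)."""
    return F.mse_loss(z, prototypes[labels])


# Usage: combine cross-entropy with the prototype term (weight 0.1 is arbitrary).
model = FusedLongTailModel(num_classes=1000)
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 1000, (4,))
z, logits = model(images)
loss = F.cross_entropy(logits, labels) + 0.1 * prototype_loss(z, labels, model.prototypes)
loss.backward()
```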

Pengxiao Han, Changkun Ye, Jinguang Tong, Cuicui Jiang, Jie Hong, Li Fang, Xuesong Li

Computing technology, computer technology

Pengxiao Han, Changkun Ye, Jinguang Tong, Cuicui Jiang, Jie Hong, Li Fang, Xuesong Li. Enhancing Features in Long-tailed Data Using Large Vision Model [EB/OL]. (2025-04-15) [2025-05-12]. https://arxiv.org/abs/2504.10852.
