
Transitive Vision-Language Prompt Learning for Domain Generalization

Source: arXiv
Abstract

Vision-language pre-training has enabled deep models to take a large step forward in generalizing to unseen domains. Recent learning methods built on vision-language pre-trained models are effective tools for domain generalization (DG) and address the problem to a large extent. However, these advances still suffer from a trade-off between domain invariance and class separability, both of which are crucial in current DG problems. In this paper, we introduce a novel prompt learning strategy that leverages deep vision prompts to address domain invariance and language prompts to ensure class separability, coupled with adaptive weighting mechanisms to balance the two. Extensive experiments demonstrate that deep vision prompts effectively extract domain-invariant features, significantly improving the generalization ability of deep models and achieving state-of-the-art performance on three datasets.
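The abstract describes fusing domain-invariant vision-prompt features with class-discriminative language-prompt features through an adaptive weighting mechanism. The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of one plausible reading: learnable logits passed through a softmax produce the adaptive weights that blend the two feature streams. The function name and all parameters are hypothetical.

```python
import numpy as np

def adaptive_prompt_fusion(vision_feat, text_feat, w_logits):
    """Hypothetical sketch: blend a domain-invariant vision-prompt
    feature with a class-discriminative language-prompt feature,
    weighted by a softmax over learnable logits (the 'adaptive
    weighting' the abstract mentions)."""
    w = np.exp(w_logits - w_logits.max())  # stable softmax
    w = w / w.sum()
    return w[0] * vision_feat + w[1] * text_feat

# Toy example: orthogonal unit features, equal logits -> equal blend.
v = np.array([1.0, 0.0])   # stand-in vision-prompt feature
t = np.array([0.0, 1.0])   # stand-in language-prompt feature
fused = adaptive_prompt_fusion(v, t, np.array([0.0, 0.0]))
print(fused)  # equal weighting gives [0.5, 0.5]
```

In a real model the logits would be trained jointly with the prompts, letting the network shift weight toward domain invariance or class separability as the data demands.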

Yang Lu, Zhen Chen, Hanzi Wang, Mengke Li, Jinlin Wu, Yan Jin, Liyuan Wang

Subject: computing technology; computer technology

Yang Lu, Zhen Chen, Hanzi Wang, Mengke Li, Jinlin Wu, Yan Jin, Liyuan Wang. Transitive Vision-Language Prompt Learning for Domain Generalization [EB/OL]. (2024-04-29) [2025-05-17]. https://arxiv.org/abs/2404.18758.
