Revisiting Continuity of Image Tokens for Cross-domain Few-shot Learning
Vision Transformer (ViT) has achieved remarkable success due to its large-scale pretraining on general domains, but it still faces challenges when applied to distant downstream domains with only scarce training data, which gives rise to the Cross-Domain Few-Shot Learning (CDFSL) task. Inspired by Self-Attention's insensitivity to token order, we find an interesting phenomenon neglected in current works: disrupting the continuity of image tokens (i.e., making pixel values not transition smoothly across patches) in ViT leads to a noticeable performance decline in the general (source) domain but only a marginal decrease in downstream target domains. This questions the role of image tokens' continuity in ViT's generalization under large domain gaps. In this paper, we delve into this phenomenon for an interpretation. We find that continuity aids ViT in learning larger spatial patterns, which are harder to transfer than smaller ones, enlarging domain distances. Meanwhile, this implies that only smaller patterns within each patch can be transferred under extreme domain gaps. Based on this interpretation, we further propose a simple yet effective method for CDFSL that better disrupts the continuity of image tokens, encouraging the model to rely less on large patterns and more on smaller ones. Extensive experiments show the effectiveness of our method in reducing domain gaps and outperforming state-of-the-art works. Codes and models are available at https://github.com/shuaiyi308/ReCIT.
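The following is a minimal sketch (not the authors' released code) of one simple way to disrupt the continuity of image tokens as described in the abstract: split an image into non-overlapping patches and randomly permute their positions before the ViT patch-embedding step, so pixel values no longer transition smoothly across patch borders while the content within each patch is preserved. Names such as shuffle_patches and patch_size are illustrative assumptions.

import torch


def shuffle_patches(images: torch.Tensor, patch_size: int = 16) -> torch.Tensor:
    """Randomly permute non-overlapping patches of a batch of images.

    images: (B, C, H, W) tensor with H and W divisible by patch_size.
    Returns a tensor of the same shape with patch positions shuffled.
    """
    b, c, h, w = images.shape
    gh, gw = h // patch_size, w // patch_size
    # (B, C, gh, p, gw, p) -> (B, gh*gw, C, p, p): flatten the patch grid.
    patches = (
        images.reshape(b, c, gh, patch_size, gw, patch_size)
        .permute(0, 2, 4, 1, 3, 5)
        .reshape(b, gh * gw, c, patch_size, patch_size)
    )
    # Shuffle patch order; small within-patch patterns stay intact.
    perm = torch.randperm(gh * gw, device=images.device)
    patches = patches[:, perm]
    # Reassemble the shuffled patches back into an image grid.
    return (
        patches.reshape(b, gh, gw, c, patch_size, patch_size)
        .permute(0, 3, 1, 4, 2, 5)
        .reshape(b, c, h, w)
    )


if __name__ == "__main__":
    x = torch.randn(2, 3, 224, 224)
    x_shuffled = shuffle_patches(x, patch_size=16)
    print(x_shuffled.shape)  # torch.Size([2, 3, 224, 224])

The shuffled images can be fed to any standard ViT: because self-attention is insensitive to token order, per-patch (small-pattern) information survives while cross-patch (large-pattern) continuity is broken, which is the effect the abstract attributes the performance gap to.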
Shuai Yi, Yixiong Zou, Yuhua Li, Ruixuan Li
Computing Technology, Computer Technology
Shuai Yi, Yixiong Zou, Yuhua Li, Ruixuan Li. Revisiting Continuity of Image Tokens for Cross-domain Few-shot Learning [EB/OL]. (2025-06-03) [2025-07-21]. https://arxiv.org/abs/2506.03110.