Can Vision Transformers with ResNet's Global Features Fairly Authenticate Demographic Faces?
Biometric face authentication is crucial in computer vision, but ensuring fairness and generalization across demographic groups remains a significant challenge. Therefore, we investigated whether Vision Transformer (ViT) and ResNet, leveraging pre-trained global features, can fairly authenticate faces across demographics while relying minimally on local features. In this investigation, we used three pre-trained state-of-the-art (SOTA) ViT foundation models from Facebook, Google, and Microsoft for global features, as well as ResNet-18. We concatenated the features from ViT and ResNet, passed them through two fully connected layers, and trained on customized face image datasets to capture the local features. Then, we designed a novel few-shot prototype network with backbone feature embeddings. We also developed new demographic face image support and query datasets for this empirical study. The network was tested on these datasets in one-shot, three-shot, and five-shot scenarios to assess how performance improves as the size of the support set increases. We observed results across datasets with varying races/ethnicities, genders, and age groups. The Microsoft Swin Transformer backbone performed best among the three SOTA ViTs for this task. The code and data are available at: https://github.com/Sufianlab/FairVitBio.
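The few-shot prototype network described in the abstract can be sketched as follows. This is a minimal illustration, assuming the backbone embeddings (e.g. concatenated ViT + ResNet features after the fully connected layers) are precomputed; the Euclidean distance metric and function names are assumptions, as the abstract does not specify these details.

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Prototypical few-shot classification (a minimal sketch).

    support: (n_support, d) precomputed backbone embeddings
             (e.g. concatenated ViT + ResNet features)
    support_labels: (n_support,) integer class ids
    query: (n_query, d) query embeddings
    Returns the predicted class id for each query embedding.
    """
    classes = np.unique(support_labels)
    # Each class prototype is the mean of its support embeddings;
    # with a k-shot support set, this averages k embeddings per class.
    prototypes = np.stack(
        [support[support_labels == c].mean(axis=0) for c in classes]
    )
    # Assign each query to the nearest prototype (Euclidean distance).
    dists = np.linalg.norm(
        query[:, None, :] - prototypes[None, :, :], axis=-1
    )
    return classes[dists.argmin(axis=1)]
```

Increasing the number of shots (one-shot to five-shot) simply averages more support embeddings per prototype, which is why performance typically improves with a larger support set.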
Abu Sufian, Marco Leo, Cosimo Distante, Anirudha Ghosh, Debaditya Barman
Computing technology; computer technology
Abu Sufian, Marco Leo, Cosimo Distante, Anirudha Ghosh, Debaditya Barman. Can Vision Transformers with ResNet's Global Features Fairly Authenticate Demographic Faces? [EB/OL]. (2025-06-03) [2025-07-16]. https://arxiv.org/abs/2506.05383.