Self-supervised Learning on Camera Trap Footage Yields a Strong Universal Face Embedder
Camera traps are revolutionising wildlife monitoring by capturing vast amounts of visual data; however, the manual identification of individual animals remains a significant bottleneck. This study introduces a fully self-supervised approach to learning robust chimpanzee face embeddings from unlabelled camera-trap footage. Leveraging the DINOv2 framework, we train Vision Transformers on automatically mined face crops, eliminating the need for identity labels. Our method demonstrates strong open-set re-identification performance, surpassing supervised baselines on challenging benchmarks such as Bossou, despite utilising no labelled data during training. This work underscores the potential of self-supervised learning in biodiversity monitoring and paves the way for scalable, non-invasive population studies.
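To make the pipeline described in the abstract concrete, the sketch below shows one way a frozen self-supervised ViT backbone could be used as a face embedder for open-set re-identification. This is a minimal illustration, not the authors' code: the publicly released `dinov2_vits14` checkpoint, the preprocessing sizes, and the similarity threshold are assumptions chosen for the example.

```python
# Minimal sketch: embed face crops with a DINOv2 ViT and do open-set matching
# by cosine similarity. Checkpoint choice and threshold are assumptions.
import torch
import torch.nn.functional as F
from torchvision import transforms
from PIL import Image

# Publicly released DINOv2 backbone (ViT-S/14) from torch.hub; used frozen.
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),          # 224 is divisible by the 14-pixel patch size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(face_crop_path: str) -> torch.Tensor:
    """Return an L2-normalised embedding for one mined face crop."""
    img = preprocess(Image.open(face_crop_path).convert('RGB')).unsqueeze(0)
    feat = model(img)                    # CLS-token features, shape (1, 384)
    return F.normalize(feat, dim=-1).squeeze(0)

def reidentify(query: torch.Tensor, gallery: dict[str, torch.Tensor],
               threshold: float = 0.6) -> str:
    """Open-set matching: return the best gallery identity, or 'unknown'
    if the best cosine similarity falls below the (assumed) threshold."""
    best_id, best_sim = 'unknown', threshold
    for identity, ref in gallery.items():
        sim = float(query @ ref)         # cosine similarity of unit vectors
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id
```

In an open-set setting, the threshold is what lets the system reject individuals not present in the gallery rather than forcing a closest-match label; in practice it would be calibrated on held-out data rather than fixed as above.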
Vladimir Iashin, Horace Lee, Dan Schofield, Andrew Zisserman
Subjects: Biological science research methods and techniques; computing and computer technology
Vladimir Iashin, Horace Lee, Dan Schofield, Andrew Zisserman. Self-supervised Learning on Camera Trap Footage Yields a Strong Universal Face Embedder [EB/OL]. (2025-07-14) [2025-07-22]. https://arxiv.org/abs/2507.10552