National Preprint Platform

Object-level Self-Distillation for Vision Pretraining

Source: arXiv
English Abstract

State-of-the-art vision pretraining methods rely on image-level self-distillation from object-centric datasets such as ImageNet, implicitly assuming each image contains a single object. This assumption does not always hold: many ImageNet images already contain multiple objects. Further, it limits scalability to scene-centric datasets that better mirror real-world complexity. We address these challenges by introducing Object-level Self-DIStillation (ODIS), a pretraining approach that shifts the self-distillation granularity from whole images to individual objects. Using object-aware cropping and masked attention, ODIS isolates object-specific regions, guiding the transformer toward semantically meaningful content and transforming a noisy, scene-level task into simpler object-level sub-tasks. We show that this approach improves visual representations both at the image and patch levels. Using masks at inference time, our method achieves an impressive $82.6\%$ $k$-NN accuracy on ImageNet1k with ViT-Large.
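The core idea of restricting attention to object-specific regions can be illustrated with a minimal sketch. This is not the authors' implementation; it is a hedged NumPy toy showing one way masked attention pooling could isolate an object's patches: attention scores for patches outside a binary object mask are set to negative infinity, so the pooled representation depends only on object patches. The function and variable names (`masked_attention_pool`, `query`, `mask`) are illustrative assumptions, not names from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention_pool(patches, query, mask):
    """Attend over patch embeddings, restricted to an object's mask.

    patches: (N, D) patch embeddings
    query:   (D,) query vector (e.g. a [CLS]-style token)
    mask:    (N,) boolean, True where a patch belongs to the object
    """
    scores = patches @ query / np.sqrt(patches.shape[1])
    # patches outside the object receive -inf, hence zero attention weight
    scores = np.where(mask, scores, -np.inf)
    weights = softmax(scores)
    return weights @ patches  # object-level representation

# Toy example: 6 patches, the object occupies patches 0-2.
rng = np.random.default_rng(0)
patches = rng.normal(size=(6, 4))
query = rng.normal(size=4)
mask = np.array([True, True, True, False, False, False])
obj_repr = masked_attention_pool(patches, query, mask)
```

Because the masked-out scores are -inf, the result is identical to attending over only the object's patches, which is what turns a scene-level image into simpler object-level sub-tasks.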

Çağlar Hızlı, Çağatay Yıldız, Pekka Marttinen

Computing and computer technology

Çağlar Hızlı, Çağatay Yıldız, Pekka Marttinen. Object-level Self-Distillation for Vision Pretraining [EB/OL]. (2025-06-04) [2025-06-21]. https://arxiv.org/abs/2506.05409.
