
Upsampling DINOv2 features for unsupervised vision tasks and weakly supervised materials segmentation


Source: arXiv
Abstract

The features of self-supervised vision transformers (ViTs) contain strong semantic and positional information relevant to downstream tasks like object localization and segmentation. Recent works combine these features with traditional methods like clustering, graph partitioning or region correlations to achieve impressive baselines without finetuning or training additional networks. We leverage upsampled features from ViT networks (e.g. DINOv2) in two workflows: in a clustering-based approach for object localization and segmentation, and paired with standard classifiers in weakly supervised materials segmentation. Both show strong performance on benchmarks, especially in weakly supervised segmentation, where the ViT features capture complex relationships inaccessible to classical approaches. We expect the flexibility and generalizability of these features will both speed up and strengthen materials characterization, from segmentation to property prediction.
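The first workflow the abstract describes (upsample patch features, then cluster them into an unsupervised segmentation) can be sketched as below. This is a minimal numpy stand-in, not the authors' pipeline: the 14x14x8 feature grid is synthetic (a real run would extract patch tokens from DINOv2 via PyTorch), and plain bilinear upsampling plus vanilla k-means stand in for whatever upsampler and clustering variant the paper actually uses.

```python
import numpy as np

def bilinear_upsample(feat, out_h, out_w):
    """Bilinearly upsample a (h, w, c) patch-feature grid to (out_h, out_w, c)."""
    h, w, _ = feat.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]   # fractional weights along height
    wx = (xs - x0)[None, :, None]   # fractional weights along width
    top = feat[y0][:, x0] * (1 - wx) + feat[y0][:, x1] * wx
    bot = feat[y1][:, x0] * (1 - wx) + feat[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def kmeans(X, k, iters=20, seed=0):
    """Lloyd's k-means with farthest-point init; returns one label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[[rng.integers(len(X))]]
    while len(centers) < k:  # greedily add the point farthest from existing centers
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, X[np.argmax(d)]])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic stand-in for a 14x14 grid of 8-dim ViT patch features:
# the left half of the "image" has one feature signature, the right another.
rng = np.random.default_rng(0)
patch_feats = rng.normal(0.0, 0.05, size=(14, 14, 8))
patch_feats[:, 7:, :] += 1.0

dense = bilinear_upsample(patch_feats, 112, 112)           # per-pixel features
seg = kmeans(dense.reshape(-1, 8), k=2).reshape(112, 112)  # unsupervised mask
```

The point of upsampling before clustering is that the segmentation mask is produced at (or near) pixel resolution rather than at the coarse 14x14 patch grid a ViT natively emits.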

Antonis Vamvakeros, Samuel J. Cooper, Ronan Docherty

Subject: Computing Technology, Computer Technology

Antonis Vamvakeros, Samuel J. Cooper, Ronan Docherty. Upsampling DINOv2 features for unsupervised vision tasks and weakly supervised materials segmentation [EB/OL]. (2025-08-06) [2025-08-24]. https://arxiv.org/abs/2410.19836.
