国家预印本平台 (National Preprint Platform)

Maybe you don't need a U-Net: convolutional feature upsampling for materials micrograph segmentation


Source: arXiv

Abstract

Feature foundation models - usually vision transformers - offer rich semantic descriptors of images, useful for downstream tasks such as (interactive) segmentation and object detection. For computational efficiency, these descriptors are often patch-based, and so struggle to represent the fine features often present in micrographs; they also struggle with the large image sizes common in materials and biological image analysis. In this work, we train a convolutional neural network to upsample low-resolution (i.e., large patch size) foundation model features with reference to the input image. We apply this upsampler network (without any further training) to efficiently featurise and then segment a variety of microscopy images, including plant cells, a lithium-ion battery cathode, and organic crystals. The richness of these upsampled features admits separation of hard-to-segment phases, such as hairline cracks. We demonstrate that interactive segmentation with these deep features produces high-quality segmentations far faster and with far fewer labels than training or fine-tuning a more traditional convolutional network.
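The core idea - a small CNN that upsamples coarse patch-level foundation-model features to pixel resolution, guided by the high-resolution input image - can be sketched as below. This is a minimal illustration, not the authors' architecture; the layer sizes, the bilinear-upsample-then-fuse design, and the names `FeatureUpsampler`, `img_encoder`, and `fuse` are all assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureUpsampler(nn.Module):
    """Illustrative upsampler: fuses bilinearly-upsampled patch features
    with a shallow encoding of the input image (not the paper's exact model)."""
    def __init__(self, feat_dim=384, img_channels=3, hidden=64):
        super().__init__()
        # Shallow encoder of the full-resolution image, used as guidance
        self.img_encoder = nn.Sequential(
            nn.Conv2d(img_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # Fuse upsampled features with the image guidance channels
        self.fuse = nn.Sequential(
            nn.Conv2d(feat_dim + hidden, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )

    def forward(self, image, patch_feats):
        # patch_feats: (B, C, H/p, W/p), e.g. from a ViT with patch size p
        up = F.interpolate(patch_feats, size=image.shape[-2:],
                           mode="bilinear", align_corners=False)
        guide = self.img_encoder(image)
        return self.fuse(torch.cat([up, guide], dim=1))

# A 224x224 image with 14x14 patch features (ViT patch size 16, 384 channels)
img = torch.randn(1, 3, 224, 224)
feats = torch.randn(1, 384, 14, 14)
out = FeatureUpsampler()(img, feats)
print(out.shape)  # torch.Size([1, 384, 224, 224])
```

Once trained, such a network can featurise new micrographs without further training; per-pixel features of this kind are what the interactive segmentation in the paper operates on.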

Ronan Docherty, Antonis Vamvakeros, Samuel J. Cooper

Subjects: Cell Biology; Botany; Automation Technology and Equipment; Computing and Computer Technology

Ronan Docherty, Antonis Vamvakeros, Samuel J. Cooper. Maybe you don't need a U-Net: convolutional feature upsampling for materials micrograph segmentation [EB/OL]. (2025-08-29) [2025-09-09]. https://arxiv.org/abs/2508.21529.
