
SeMask: Semantically Masked Transformers for Semantic Segmentation

Source: arXiv
English Abstract

Finetuning a pretrained backbone in the encoder part of an image transformer network has been the traditional approach for the semantic segmentation task. However, such an approach leaves out the semantic context that an image provides during the encoding stage. This paper argues that incorporating semantic information of the image into pretrained hierarchical transformer-based backbones while finetuning improves the performance considerably. To achieve this, we propose SeMask, a simple and effective framework that incorporates semantic information into the encoder with the help of a semantic attention operation. In addition, we use a lightweight semantic decoder during training to provide supervision to the intermediate semantic prior maps at every stage. Our experiments demonstrate that incorporating semantic priors enhances the performance of the established hierarchical encoders with a slight increase in the number of FLOPs. We provide empirical proof by integrating SeMask into Swin Transformer and Mix Transformer backbones as our encoder paired with different decoders. Our framework achieves a new state-of-the-art of 58.25% mIoU on the ADE20K dataset and improvements of over 3% in the mIoU metric on the Cityscapes dataset. The code and checkpoints are publicly available at https://github.com/Picsart-AI-Research/SeMask-Segmentation .
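The abstract describes the core mechanism at a high level: each encoder stage predicts intermediate semantic prior maps and uses them, via a semantic attention operation, to modulate the features, while a lightweight semantic decoder supervises those maps during training. The snippet below is a minimal, illustrative PyTorch sketch of such a layer, written under assumed shapes and layer names; it is not the authors' implementation, which is available at the linked repository.

```python
import torch
import torch.nn as nn

class SemanticAttentionSketch(nn.Module):
    """Toy sketch of a semantic attention step: project tokens to K per-class
    semantic prior maps, then use them to gate the features. Assumption-based
    illustration only; see the official code at
    https://github.com/Picsart-AI-Research/SeMask-Segmentation for the real layer."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.to_semantic = nn.Linear(dim, num_classes)    # per-token class logits (semantic priors)
        self.from_semantic = nn.Linear(num_classes, dim)  # map priors back to feature space
        self.scale = nn.Parameter(torch.tensor(0.1))      # learnable mixing weight (assumed)

    def forward(self, x: torch.Tensor):
        # x: (B, N, C) tokens from one stage of a hierarchical transformer encoder
        sem_logits = self.to_semantic(x)                  # (B, N, K) semantic prior maps
        attn = sem_logits.softmax(dim=-1)                 # normalize over classes
        x = x + self.scale * self.from_semantic(attn)     # semantically masked features
        # sem_logits would be reshaped/upsampled and supervised by a lightweight
        # semantic decoder during training, as the abstract describes.
        return x, sem_logits
```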

Nikita Orlov, Humphrey Shi, Jitesh Jain, Steven Walton, Jiachen Li, Zilong Huang, Anukriti Singh

Computing Technology; Computer Technology

Nikita Orlov, Humphrey Shi, Jitesh Jain, Steven Walton, Jiachen Li, Zilong Huang, Anukriti Singh. SeMask: Semantically Masked Transformers for Semantic Segmentation [EB/OL]. (2021-12-23) [2025-08-05]. https://arxiv.org/abs/2112.12782.
