SeeDiff: Off-the-Shelf Seeded Mask Generation from Diffusion Models
Entrusted with the goal of pixel-level object classification, semantic segmentation networks entail the laborious preparation of pixel-level annotation masks. To obtain pixel-level annotation masks for a given class without human effort, a few recent works have proposed to generate pairs of images and annotation masks by exploiting the image-text relationships modeled by text-to-image generative models, especially Stable Diffusion. However, these works do not fully exploit the capability of text-guided diffusion models and thus still require a pre-trained segmentation network, careful text prompt tuning, or the training of a segmentation network to produce the final annotation masks. In this work, we take a closer look at the attention mechanisms of Stable Diffusion, from which we draw connections to classical seeded segmentation approaches. In particular, we show that cross-attention alone provides only very coarse object localization, which can nevertheless serve as initial seeds. Then, akin to region expansion in seeded segmentation, we utilize the semantic-correspondence-modeling capability of self-attention to iteratively spread attention from the seeds to the entire object region using multi-scale self-attention maps. We also observe that an image synthesized from a simple text prompt often has a uniform background, in which correspondences are easier to find than within complex-structured objects. We therefore further refine the mask using a more accurate background mask. Our proposed method, dubbed SeeDiff, generates high-quality masks off-the-shelf from Stable Diffusion, without any additional training procedure, prompt tuning, or a pre-trained segmentation network.
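The abstract describes a seeded region-expansion procedure: cross-attention supplies coarse seeds, and self-attention iteratively propagates them across the object. The sketch below is only an illustration of that general idea under assumptions of our own; the function name, thresholds (seed_thresh, grow_thresh), iteration count, and the single-scale self-attention map are hypothetical, and the multi-scale aggregation and background-mask refinement described in the paper are not reproduced here.

```python
import numpy as np

def seeded_mask_from_attention(cross_attn, self_attn,
                               seed_thresh=0.6, grow_thresh=0.5, n_iters=5):
    """Grow a binary mask from cross-attention seeds by propagating
    activation through a self-attention map (single resolution).

    cross_attn: (N,) cross-attention scores of the class token over N pixels.
    self_attn:  (N, N) row-stochastic self-attention map over the same pixels.
    """
    # 1) Seeding: pixels the class token attends to most strongly.
    mask = (cross_attn >= seed_thresh * cross_attn.max()).astype(np.float32)

    for _ in range(n_iters):
        # 2) Region expansion: spread mask activation to pixels that
        #    self-attention identifies as semantically corresponding.
        spread = self_attn @ mask
        spread = spread / (spread.max() + 1e-8)
        # Keep previously accepted pixels and add strongly supported ones.
        mask = np.maximum(mask, (spread >= grow_thresh).astype(np.float32))
    return mask


# Toy usage with random maps as stand-ins for Stable Diffusion attention.
rng = np.random.default_rng(0)
N = 16 * 16
cross_attn = rng.random(N)
self_attn = rng.random((N, N))
self_attn /= self_attn.sum(axis=1, keepdims=True)   # row-normalize
mask = seeded_mask_from_attention(cross_attn, self_attn)
print(mask.shape, mask.sum())
```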
Joon Hyun Park, Kumju Jo, Sungyong Baik
Computing Technology; Computer Technology
Joon Hyun Park, Kumju Jo, Sungyong Baik. SeeDiff: Off-the-Shelf Seeded Mask Generation from Diffusion Models [EB/OL]. (2025-07-26) [2025-08-10]. https://arxiv.org/abs/2507.19808.