
Partial CLIP is Enough: Chimera-Seg for Zero-shot Semantic Segmentation


Source: arXiv
Abstract

Zero-shot Semantic Segmentation (ZSS) aims to segment both seen and unseen classes using supervision from only seen classes. Beyond adaptation-based methods, distillation-based approaches transfer the vision-language alignment of a vision-language model, e.g., CLIP, to segmentation models. However, such knowledge transfer remains challenging due to: (1) the difficulty of aligning vision-based features with the textual space, which requires combining spatial precision with vision-language alignment; and (2) the semantic gap between CLIP's global representations and the local, fine-grained features of segmentation models. To address challenge (1), we propose Chimera-Seg, which, like the Chimera of Greek mythology, integrates a segmentation backbone as the body and a CLIP-based semantic head as the head, combining spatial precision with vision-language alignment. Specifically, Chimera-Seg comprises a trainable segmentation model and a CLIP Semantic Head (CSH), which maps dense features into the CLIP-aligned space. The CSH incorporates a frozen subnetwork and fixed projection layers from the CLIP visual encoder, along with lightweight trainable components. This partial module from the CLIP visual encoder, paired with the segmentation model, retains segmentation capability while easing the mapping into CLIP's semantic space. To address challenge (2), we propose Selective Global Distillation (SGD), which distills knowledge only from dense features exhibiting high similarity to the CLIP CLS token, gradually reducing the number of features used for alignment as training progresses. In addition, a Semantic Alignment Module (SAM) further aligns dense visual features with semantic embeddings extracted from the frozen CLIP text encoder. Experiments on two benchmarks show improvements of 0.9% and 1.2% in hIoU.
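The SGD selection rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: cosine similarity to the CLS token and a linear shrinking schedule for the number of kept features are assumptions, since the abstract does not specify the similarity measure or the schedule.

```python
import numpy as np

def select_for_distillation(dense_feats, cls_token, progress, k_max=None):
    """Illustrative sketch of Selective Global Distillation (SGD) selection:
    keep only the dense features most similar to the CLIP CLS token, and
    shrink the kept set as training progresses.

    dense_feats: (N, D) array of dense features from the segmentation model.
    cls_token:   (D,) CLIP global CLS embedding.
    progress:    float in [0, 1], fraction of training completed.

    NOTE: cosine similarity and the linear schedule are assumptions made
    for illustration; the paper's exact choices are not in the abstract.
    """
    n = dense_feats.shape[0]
    if k_max is None:
        k_max = n
    # Cosine similarity between each dense feature and the CLS token.
    f = dense_feats / np.linalg.norm(dense_feats, axis=1, keepdims=True)
    c = cls_token / np.linalg.norm(cls_token)
    sims = f @ c
    # Linearly reduce the number of features used for alignment (at least 1).
    k = max(1, int(round(k_max * (1.0 - progress))))
    top_idx = np.argsort(-sims)[:k]
    return top_idx, sims[top_idx]
```

A distillation loss would then be computed only between the selected features and the CLS token, so early training aligns many locations globally while late training focuses on the most CLIP-consistent ones.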

Jialei Chen, Xu Zheng, Danda Pani Paudel, Luc Van Gool, Hiroshi Murase, Daisuke Deguchi

Subjects: Computing Technology; Computer Technology

Jialei Chen, Xu Zheng, Danda Pani Paudel, Luc Van Gool, Hiroshi Murase, Daisuke Deguchi. Partial CLIP is Enough: Chimera-Seg for Zero-shot Semantic Segmentation [EB/OL]. (2025-06-27) [2025-07-23]. https://arxiv.org/abs/2506.22032.
