
Co-Seg++: Mutual Prompt-Guided Collaborative Learning for Versatile Medical Segmentation

Source: arXiv
Abstract

Medical image analysis is critical yet challenging, as it requires jointly segmenting organs or tissues as well as numerous instances for anatomical structure and tumor microenvironment analysis. Existing studies typically formulate these segmentation tasks in isolation, overlooking their fundamental interdependencies and leading to suboptimal segmentation performance and insufficient medical image understanding. To address this issue, we propose the Co-Seg++ framework for versatile medical segmentation. Specifically, we introduce a novel co-segmentation paradigm that allows the semantic and instance segmentation tasks to enhance each other. We first devise a spatio-temporal prompt encoder (STP-Encoder) to capture long-range spatial and temporal relationships between segmentation regions and image embeddings as prior spatial constraints. We further design a multi-task collaborative decoder (MTC-Decoder) that leverages cross-guidance to strengthen the contextual consistency of both tasks and jointly computes semantic and instance segmentation masks. Extensive experiments on diverse CT and histopathology datasets demonstrate that the proposed Co-Seg++ outperforms state-of-the-art methods in the semantic, instance, and panoptic segmentation of dental anatomical structures, histopathology tissues, and nuclei instances. The source code is available at https://github.com/xq141839/Co-Seg-Plus.
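The abstract describes a two-stage design: an STP-Encoder that turns prior segmentation regions into prompt embeddings, and an MTC-Decoder that fuses them with image embeddings to emit semantic and instance masks jointly. The sketch below is a minimal, speculative outline of that data flow, not the authors' implementation (see the GitHub repository for the official code); the module interfaces, tensor shapes, and the use of self-attention as a stand-in for cross-guidance are all assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class STPEncoder(nn.Module):
    """Illustrative spatio-temporal prompt encoder: projects prior semantic and
    instance masks into prompt embeddings that act as spatial constraints."""
    def __init__(self, dim=256):
        super().__init__()
        # Two channels assumed: one prior semantic mask, one prior instance mask.
        self.proj = nn.Conv2d(2, dim, kernel_size=1)

    def forward(self, prior_masks):          # (B, 2, H, W)
        return self.proj(prior_masks)        # (B, dim, H, W)

class MTCDecoder(nn.Module):
    """Illustrative multi-task collaborative decoder: a shared attention stage
    stands in for cross-guidance between the two tasks, followed by separate
    semantic and instance heads."""
    def __init__(self, dim=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.sem_head = nn.Conv2d(dim, 1, kernel_size=1)
        self.ins_head = nn.Conv2d(dim, 1, kernel_size=1)

    def forward(self, img_emb, prompt_emb):  # both (B, dim, H, W)
        b, c, h, w = img_emb.shape
        tokens = (img_emb + prompt_emb).flatten(2).transpose(1, 2)  # (B, HW, dim)
        fused, _ = self.attn(tokens, tokens, tokens)
        fused = fused.transpose(1, 2).reshape(b, c, h, w)
        return self.sem_head(fused), self.ins_head(fused)

class CoSegSketch(nn.Module):
    """Hypothetical top-level wrapper: image features plus mask prompts go
    through the collaborative decoder to produce both masks jointly."""
    def __init__(self, image_encoder, dim=256):
        super().__init__()
        self.image_encoder = image_encoder   # any backbone yielding (B, dim, H, W)
        self.stp_encoder = STPEncoder(dim)
        self.mtc_decoder = MTCDecoder(dim)

    def forward(self, image, prior_masks):
        img_emb = self.image_encoder(image)
        prompt_emb = self.stp_encoder(prior_masks)
        return self.mtc_decoder(img_emb, prompt_emb)  # (semantic, instance) logits
```

In this reading, the prompt embeddings let each task condition on the other's previous predictions, which is the mutual-enhancement idea the abstract attributes to the co-segmentation paradigm.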

Qing Xu, Yuxiang Luo, Wenting Duan, Zhen Chen

Subjects: medical research methods; current state and development of medicine

Qing Xu, Yuxiang Luo, Wenting Duan, Zhen Chen. Co-Seg++: Mutual Prompt-Guided Collaborative Learning for Versatile Medical Segmentation [EB/OL]. (2025-06-20) [2025-07-20]. https://arxiv.org/abs/2506.17159.
