Modality Translation and Registration of MR and Ultrasound Images Using Diffusion Models
Multimodal MR-US registration is critical for prostate cancer diagnosis. However, this task remains challenging due to significant modality discrepancies. Existing methods often fail to align critical boundaries while being overly sensitive to irrelevant details. To address this, we propose an anatomically coherent modality translation (ACMT) network based on a hierarchical feature disentanglement design. We leverage shallow-layer features for texture consistency and deep-layer features for boundary preservation. Unlike conventional modality translation methods that convert one modality directly into another, our ACMT introduces a custom-designed intermediate pseudo modality: both MR and US images are translated toward this intermediate domain, addressing the bottlenecks that traditional translation methods face in the downstream registration task. Experiments demonstrate that our method mitigates modality-specific discrepancies while preserving the anatomical boundaries crucial for accurate registration. Quantitative evaluations show superior modality similarity compared to state-of-the-art modality translation methods, and downstream registration experiments confirm that our translated images achieve the best alignment performance, highlighting the robustness of our framework for multimodal prostate image registration.
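The core idea of translating both modalities toward a shared intermediate pseudo modality can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' ACMT implementation: the SimpleTranslator module, its toy layer layout, and the MSE-based similarity are placeholders chosen only to show how two per-modality translators can target the same intermediate domain before a mono-modal registration step.

# Minimal, illustrative sketch (assumptions only, not the ACMT network):
# one small translator per source modality, both mapping into a shared
# intermediate pseudo-modality domain on which registration is performed.
import torch
import torch.nn as nn

class SimpleTranslator(nn.Module):
    """Maps a single-channel image into the shared pseudo-modality domain."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            # early convolutions: low-level/texture features (toy stand-in)
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            # later convolutions: higher-level/boundary features (toy stand-in)
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# One translator per modality, both targeting the same intermediate domain.
mr_to_pseudo = SimpleTranslator()
us_to_pseudo = SimpleTranslator()

mr = torch.randn(1, 1, 128, 128)  # toy MR slice
us = torch.randn(1, 1, 128, 128)  # toy US slice

pseudo_mr = mr_to_pseudo(mr)
pseudo_us = us_to_pseudo(us)

# Downstream mono-modal registration would align pseudo_mr and pseudo_us,
# e.g. by minimizing an intensity-based similarity loss such as MSE.
similarity = torch.mean((pseudo_mr - pseudo_us) ** 2)
print(similarity.item())

Because both images now live in the same intermediate domain, the registration step can rely on a simple intensity-based criterion rather than a cross-modal similarity measure; this is the motivation the abstract gives for the intermediate pseudo modality.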
Xudong Ma, Nantheera Anantrasirichai, Stefanos Bolomytis, Alin Achim
Oncology; Medical research methods
Xudong Ma, Nantheera Anantrasirichai, Stefanos Bolomytis, Alin Achim. Modality Translation and Registration of MR and Ultrasound Images Using Diffusion Models [EB/OL]. (2025-06-01) [2025-06-23]. https://arxiv.org/abs/2506.01025.