Enhancing Synthetic CT from CBCT via Multimodal Fusion and End-To-End Registration
Cone-Beam Computed Tomography (CBCT) is widely used for intraoperative imaging due to its rapid acquisition and low radiation dose. However, CBCT images typically suffer from artifacts and lower visual quality compared to conventional Computed Tomography (CT). A promising solution is synthetic CT (sCT) generation, where CBCT volumes are translated into the CT domain. In this work, we enhance sCT generation through multimodal learning by jointly leveraging intraoperative CBCT and preoperative CT data. To overcome the inherent misalignment between modalities, we introduce an end-to-end learnable registration module within the sCT pipeline. This model is evaluated on a controlled synthetic dataset, allowing precise manipulation of data quality and alignment parameters. Further, we validate its robustness and generalizability on two real-world clinical datasets. Experimental results demonstrate that integrating registration in multimodal sCT generation improves sCT quality, outperforming baseline multimodal methods in 79 out of 90 evaluation settings. Notably, the improvement is most significant in cases where CBCT quality is low and the preoperative CT is moderately misaligned.
Maximilian Tschuchnig, Lukas Lamminger, Philipp Steininger, Michael Gadermayr
Current state and development of medicine; medical research methods
Maximilian Tschuchnig, Lukas Lamminger, Philipp Steininger, Michael Gadermayr. Enhancing Synthetic CT from CBCT via Multimodal Fusion and End-To-End Registration [EB/OL]. (2025-07-08) [2025-07-18]. https://arxiv.org/abs/2507.06067
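
The abstract describes a pipeline in which a preoperative CT is aligned to the intraoperative CBCT by a learnable registration module and then fused with the CBCT to synthesize a CT-quality volume, with the whole chain trained end to end. The sketch below illustrates one plausible way to wire such a pipeline; it is not the authors' implementation, and all module names (RegistrationNet, FusionGenerator, warp) and the tiny network sizes are assumptions made for illustration, using a standard PyTorch setup.

```python
# Minimal sketch (not the paper's code): multimodal sCT generation with an
# end-to-end learnable registration module. All architectures are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegistrationNet(nn.Module):
    """Predicts a dense displacement field aligning preoperative CT to CBCT."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 3, 3, padding=1),  # 3-channel displacement field
        )

    def forward(self, cbct, ct):
        return self.net(torch.cat([cbct, ct], dim=1))

def warp(volume, flow):
    """Warp a 3D volume with a displacement field via trilinear grid_sample."""
    b, _, d, h, w = volume.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    grids = torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
        torch.linspace(-1, 1, w), indexing="ij")
    identity = torch.stack(grids[::-1], dim=-1)  # (d, h, w, 3), (x, y, z) order
    identity = identity.unsqueeze(0).expand(b, -1, -1, -1, -1).to(volume)
    # flow has shape (b, 3, d, h, w); move channels last and add to identity.
    grid = identity + flow.permute(0, 2, 3, 4, 1)
    return F.grid_sample(volume, grid, align_corners=True)

class FusionGenerator(nn.Module):
    """Maps the fused (CBCT, registered CT) pair to a synthetic CT volume."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, cbct, ct_reg):
        return self.net(torch.cat([cbct, ct_reg], dim=1))

# End-to-end forward and backward pass on toy volumes.
reg, gen = RegistrationNet(), FusionGenerator()
cbct = torch.randn(1, 1, 32, 32, 32)    # intraoperative CBCT
ct_pre = torch.randn(1, 1, 32, 32, 32)  # misaligned preoperative CT
ct_gt = torch.randn(1, 1, 32, 32, 32)   # ground-truth CT (training target)

flow = reg(cbct, ct_pre)                # predict displacement field
ct_reg = warp(ct_pre, flow)             # align preop CT to the CBCT frame
sct = gen(cbct, ct_reg)                 # fuse modalities and synthesize CT
loss = F.l1_loss(sct, ct_gt)            # gradients also reach the registration net
loss.backward()
```

Because the warp is differentiable, the synthesis loss backpropagates into the registration network, which is what makes the registration "end-to-end learnable" rather than a fixed preprocessing step.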