Unified 3D MRI Representations via Sequence-Invariant Contrastive Learning
Self-supervised deep learning has accelerated 2D natural image analysis but remains difficult to translate to 3D MRI, where data are scarce and pre-trained 2D backbones cannot capture volumetric context. We present a \emph{sequence-invariant} self-supervised framework leveraging quantitative MRI (qMRI). By simulating multiple MRI contrasts from a single 3D qMRI scan and enforcing consistent representations across these contrasts, we learn anatomy-centric rather than sequence-specific features. The result is a single 3D encoder that excels across tasks and protocols. Experiments on healthy brain segmentation (IXI), stroke lesion segmentation (ARC), and MRI denoising show significant gains over baseline SSL approaches, especially in low-data settings (up to +8.3\% Dice, +4.2 dB PSNR). The model also generalises to unseen sites, supporting scalable clinical use. Code and trained models are publicly available at https://github.com/liamchalcroft/contrast-squared
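The core idea above, enforcing consistent representations of the same anatomy across simulated contrasts, can be sketched with an InfoNCE-style contrastive loss. This is a minimal NumPy illustration under assumptions: the paper does not specify this exact loss, and the function names, temperature, and toy embeddings are hypothetical.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE-style loss between two batches of embeddings.

    z_a[i] and z_b[i] are assumed to encode the same anatomy rendered
    under two different simulated MRI contrasts (the positive pair);
    all other pairings in the batch act as negatives.
    """
    # L2-normalise so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives lie on the diagonal: same anatomy, different contrast
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
anatomy = rng.normal(size=(8, 32))  # shared anatomical signal
# two simulated "contrasts" of the same volumes: shared signal plus
# small contrast-specific perturbations
z_t1 = anatomy + 0.05 * rng.normal(size=anatomy.shape)
z_t2 = anatomy + 0.05 * rng.normal(size=anatomy.shape)

loss_matched = info_nce(z_t1, z_t2)
loss_random = info_nce(z_t1, rng.normal(size=anatomy.shape))
```

Minimising such a loss pushes the encoder to keep anatomy-matched pairs close regardless of contrast, which is what makes the learned features sequence-invariant: here `loss_matched` comes out far lower than `loss_random`.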
Liam Chalcroft, Jenny Crinion, Cathy J. Price, John Ashburner
Categories: Medical research methods; Current state and development of medicine
Liam Chalcroft, Jenny Crinion, Cathy J. Price, John Ashburner. Unified 3D MRI Representations via Sequence-Invariant Contrastive Learning [EB/OL]. (2025-07-29) [2025-08-23]. https://arxiv.org/abs/2501.12057