Towards a Universal 3D Medical Multi-modality Generalization via Learning Personalized Invariant Representation
Variations in medical imaging modalities and individual anatomical differences pose challenges to cross-modality generalization in multi-modal tasks. Existing methods typically focus on common anatomical patterns, neglecting individual differences and thereby limiting generalization performance. This paper emphasizes the critical role of learning individual-level invariance, i.e., a personalized representation $\mathbb{X}_h$, in enhancing multi-modality generalization under both homogeneous and heterogeneous settings. It reveals that the mappings from an individual's underlying biological profile to the different medical modalities remain static across the population, a property implicitly exploited during personalization. We propose a two-stage approach: pre-training to learn the invariant representation $\mathbb{X}_h$ for personalization, followed by fine-tuning for diverse downstream tasks. We provide both theoretical and empirical evidence for the feasibility and advantages of personalization, showing that our approach achieves greater generalizability and transferability across diverse multi-modal medical tasks than methods without personalization. Extensive experiments further validate that our approach significantly improves performance across a variety of generalization scenarios.
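The abstract describes the two-stage scheme only at a high level; below is a minimal, hypothetical PyTorch sketch of what such a pipeline could look like. The encoder architecture, the invariance objective (here a simple MSE between paired modalities of the same subject), the representation dimension, and the downstream head are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the two-stage scheme: stage 1 pre-trains an encoder
# so that 3D volumes of the same subject in different modalities map to a
# shared personalized representation X_h; stage 2 fine-tunes for a downstream
# task. Module names, losses, and shapes are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder3D(nn.Module):
    """Toy 3D encoder producing a per-subject representation (assumed)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return self.net(x)

encoder = Encoder3D()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def pretrain_step(vol_mod_a, vol_mod_b):
    # Stage 1: vol_mod_* are (B, 1, D, H, W) volumes of the same subjects in
    # two modalities; pulling their embeddings together encourages a
    # modality-invariant, individual-level representation X_h.
    x_h_a = encoder(vol_mod_a)
    x_h_b = encoder(vol_mod_b)
    loss = F.mse_loss(x_h_a, x_h_b)  # assumed invariance objective
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Stage 2: reuse the pre-trained encoder and train a small task head,
# e.g. a hypothetical 2-class downstream classification task.
head = nn.Linear(128, 2)
ft_opt = torch.optim.Adam(head.parameters(), lr=1e-4)

def finetune_step(vol, label):
    with torch.no_grad():  # freeze the encoder (one common choice)
        x_h = encoder(vol)
    loss = F.cross_entropy(head(x_h), label)
    ft_opt.zero_grad(); loss.backward(); ft_opt.step()
    return loss.item()
```

Whether the encoder is frozen or partially fine-tuned in stage 2, and which invariance loss is used in stage 1, are design choices the abstract does not specify.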
Tan Pan, Xi Yang, Tianyi Liu, Qiufeng Wang, Anh Nguyen, Chen Jiang, Xin Guo, Zhaorui Tan, Yuan Cheng, Yuan Qi, Kaizhu Huang
Subjects: Medical and Health Theory; Medical Research Methods
Tan Pan, Xi Yang, Tianyi Liu, Qiufeng Wang, Anh Nguyen, Chen Jiang, Xin Guo, Zhaorui Tan, Yuan Cheng, Yuan Qi, Kaizhu Huang. Towards a Universal 3D Medical Multi-modality Generalization via Learning Personalized Invariant Representation [EB/OL]. (2025-07-24) [2025-08-16]. https://arxiv.org/abs/2411.06106.