SynthFM: Training Modality-agnostic Foundation Models for Medical Image Segmentation without Real Medical Data
Foundation models like the Segment Anything Model (SAM) excel in zero-shot segmentation for natural images but struggle with medical image segmentation due to differences in texture, contrast, and noise. Annotating medical images is costly and requires domain expertise, limiting large-scale annotated data availability. To address this, we propose SynthFM, a synthetic data generation framework that mimics the complexities of medical images, enabling foundation models to adapt without real medical data. Using SAM's pretrained encoder and training the decoder from scratch on SynthFM's dataset, we evaluated our method on 11 anatomical structures across 9 datasets (CT, MRI, and Ultrasound). SynthFM outperformed zero-shot baselines like SAM and MedSAM, achieving superior results under different prompt settings and on out-of-distribution datasets.
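The training setup described above (reusing SAM's pretrained image encoder frozen, while training a mask decoder from scratch on synthetic data) can be sketched roughly as follows. This is a minimal illustrative sketch, not the actual SynthFM or SAM code: the module names, shapes, and loss are placeholder assumptions.

```python
# Hypothetical sketch: frozen pretrained encoder + decoder trained from
# scratch on synthetic data. ToyEncoder/ToyDecoder are stand-ins, NOT
# the real SAM ViT encoder or SynthFM mask decoder.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):  # stand-in for a pretrained image encoder
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3, padding=1)
    def forward(self, x):
        return self.conv(x)

class ToyDecoder(nn.Module):  # stand-in for a mask decoder trained from scratch
    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(8, 1, 1)
    def forward(self, feats):
        return self.head(feats)

encoder, decoder = ToyEncoder(), ToyDecoder()
for p in encoder.parameters():  # freeze the "pretrained" encoder
    p.requires_grad = False
encoder.eval()

opt = torch.optim.Adam(decoder.parameters(), lr=1e-4)  # only the decoder updates
loss_fn = nn.BCEWithLogitsLoss()

img = torch.rand(2, 1, 32, 32)                    # synthetic image batch
mask = (torch.rand(2, 1, 32, 32) > 0.5).float()   # synthetic target masks

with torch.no_grad():           # no gradients flow through the frozen encoder
    feats = encoder(img)
pred = decoder(feats)           # decoder output, same spatial size as the mask
loss = loss_fn(pred, mask)
loss.backward()
opt.step()
```

The key design choice mirrored here is that only decoder parameters appear in the optimizer, so synthetic supervision adapts the segmentation head without disturbing the pretrained representation.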
Sourya Sengupta, Satrajit Chakrabarty, Keerthi Sravan Ravi, Gopal Avinash, Ravi Soni
Subject areas: Medical research methods; Basic medicine
Sourya Sengupta, Satrajit Chakrabarty, Keerthi Sravan Ravi, Gopal Avinash, Ravi Soni. SynthFM: Training Modality-agnostic Foundation Models for Medical Image Segmentation without Real Medical Data [EB/OL]. (2025-04-10) [2025-04-26]. https://arxiv.org/abs/2504.08177.