Learn2Synth: Learning Optimal Data Synthesis using Hypergradients for Brain Image Segmentation
Domain randomization through synthesis is a powerful strategy to train networks that are unbiased with respect to the domain of the input images. Randomization allows networks to see a virtually infinite range of intensities and artifacts during training, thereby minimizing overfitting to appearance and maximizing generalization to unseen data. Although powerful, this approach relies on the accurate tuning of a large set of hyperparameters that govern the probabilistic distribution of the synthesized images. Instead of manually tuning these parameters, we introduce Learn2Synth, a novel procedure in which synthesis parameters are learned using a small set of real labeled data. Unlike methods that impose constraints to align synthetic data with real data (e.g., contrastive or adversarial techniques), which risk misaligning the image and its label map, we tune an augmentation engine such that a segmentation network trained on synthetic data has optimal accuracy when applied to real data. This approach allows the training procedure to benefit from real labeled examples, without ever using these real examples to train the segmentation network, which avoids biasing the network towards the properties of the training set. Specifically, we develop parametric and nonparametric strategies to enhance synthetic images in a way that improves the performance of the segmentation network. We demonstrate the effectiveness of this learning strategy on synthetic and real-world brain scans. Code is available at: https://github.com/HuXiaoling/Learn2Synth.
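The abstract describes a bilevel scheme: the segmentation network is trained only on synthetic images, while the synthesis parameters are updated so that the resulting network segments a small set of real labeled scans well. The sketch below is a minimal, hypothetical illustration of one such alternating step in PyTorch; the names (`synth_engine`, `seg_net`, `learn2synth_step`) are illustrative assumptions rather than the API of the released code, and the single-SGD-step hypergradient shown here is one common way to realize this kind of scheme, not necessarily the exact procedure in the paper.

```python
# Hypothetical sketch of a Learn2Synth-style alternating update (PyTorch >= 2.0).
# All module and function names are illustrative placeholders, not the released API.
import torch

def dice_loss(logits, target_onehot, eps=1e-6):
    """Soft Dice loss; `target_onehot` is a one-hot label map of shape (N, C, H, W)."""
    probs = torch.softmax(logits, dim=1)
    inter = (probs * target_onehot).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + target_onehot.sum(dim=(2, 3))
    return 1.0 - (2.0 * inter / (union + eps)).mean()

def learn2synth_step(synth_engine, seg_net, label_map,
                     real_image, real_label, inner_lr, synth_opt):
    """One alternating update; `label_map` and `real_label` are one-hot.

    Inner step: synthesize an image from a label map and take one SGD step on
    the segmentation network, kept differentiable w.r.t. the synthesis
    parameters via create_graph=True.
    Outer step: score the *updated* network on a real labeled scan and push
    that loss back through the inner step into the synthesis engine only
    (a hypergradient). Real images never train seg_net directly.
    """
    # Inner step on synthetic data.
    synth_image = synth_engine(label_map)            # learnable synthesis
    inner_loss = dice_loss(seg_net(synth_image), label_map)
    names, params = zip(*seg_net.named_parameters())
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    fast_params = {n: p - inner_lr * g               # theta' = theta - lr * grad
                   for n, p, g in zip(names, params, grads)}

    # Outer step on real data, evaluated with the functionally updated weights.
    real_logits = torch.func.functional_call(seg_net, fast_params, (real_image,))
    outer_loss = dice_loss(real_logits, real_label)
    synth_grads = torch.autograd.grad(outer_loss, list(synth_engine.parameters()))
    synth_opt.zero_grad()
    for p, g in zip(synth_engine.parameters(), synth_grads):
        p.grad = g                                   # hypergradient on synthesis params
    synth_opt.step()

    # seg_net itself is then trained on synthetic data with a regular optimizer.
    return inner_loss.item(), outer_loss.item()
```

Differentiating through a single inner step keeps the memory cost of the unrolled graph modest; a real implementation would also randomize intensities and artifacts inside the synthesis engine, which this sketch omits for brevity.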
Xiaoling Hu, Xiangrui Zeng, Oula Puonti, Juan Eugenio Iglesias, Bruce Fischl, Yael Balbastre
Subject: Computing Technology; Computer Technology
Xiaoling Hu, Xiangrui Zeng, Oula Puonti, Juan Eugenio Iglesias, Bruce Fischl, Yael Balbastre. Learn2Synth: Learning Optimal Data Synthesis using Hypergradients for Brain Image Segmentation [EB/OL]. (2024-11-22) [2025-06-29]. https://arxiv.org/abs/2411.16719