How to Train your Text-to-Image Model: Evaluating Design Choices for Synthetic Training Captions
Training data is at the core of any successful text-to-image model. The quality and descriptiveness of the captions paired with each image are crucial to a model's performance. Given the noise and inconsistency of web-scraped datasets, recent work has shifted towards synthetic training captions. While this setup is generally believed to produce more capable models, the current literature provides little insight into the underlying design choices. This study closes that gap by systematically investigating how different synthetic captioning strategies impact the downstream performance of text-to-image models. Our experiments demonstrate that dense, high-quality captions enhance text alignment but may introduce trade-offs in output aesthetics and diversity. Conversely, captions of randomized lengths yield balanced improvements across aesthetics and alignment without compromising sample diversity. We also demonstrate that varying caption distributions introduce significant shifts in the output bias of a trained model. Our findings underscore the importance of caption design in achieving optimal model performance and provide practical insights for more effective training data strategies in text-to-image generation.
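As a rough, hypothetical illustration of the randomized-length captioning strategy described in the abstract (not the authors' actual pipeline), one could sample, for each training image, one synthetic caption from a set of captions of varying verbosity. The function name and data layout below are assumptions for the sketch:

```python
import random

def pick_training_caption(captions_by_length, rng=random):
    """Pick one caption per image, varying descriptiveness across samples.

    `captions_by_length` is a hypothetical dict mapping a length bucket
    (e.g. "short", "medium", "dense") to a synthetic caption for the image.
    Sampling the bucket uniformly at random approximates a
    randomized-caption-length strategy.
    """
    bucket = rng.choice(list(captions_by_length.keys()))
    return captions_by_length[bucket]

# Example usage with made-up captions for a single image.
captions = {
    "short": "A dog on a beach.",
    "medium": "A golden retriever running along a sandy beach at sunset.",
    "dense": ("A wet golden retriever mid-stride on a sandy beach, waves "
              "breaking behind it, warm sunset light, shallow depth of field."),
}
print(pick_training_caption(captions))
```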
Manuel Brack, Sudeep Katakol, Felix Friedrich, Patrick Schramowski, Hareesh Ravi, Kristian Kersting, Ajinkya Kale
Computing Technology; Computer Technology
Manuel Brack, Sudeep Katakol, Felix Friedrich, Patrick Schramowski, Hareesh Ravi, Kristian Kersting, Ajinkya Kale. How to Train your Text-to-Image Model: Evaluating Design Choices for Synthetic Training Captions [EB/OL]. (2025-06-20) [2025-06-30]. https://arxiv.org/abs/2506.16679.