Structured Captions Improve Prompt Adherence in Text-to-Image Models (Re-LAION-Caption 19M)
We argue that generative text-to-image models often struggle with prompt adherence due to the noisy and unstructured nature of large-scale datasets like LAION-5B, which forces users to rely heavily on prompt engineering to elicit desirable outputs. In this work, we propose that enforcing a consistent caption structure during training can significantly improve model controllability and alignment. We introduce Re-LAION-Caption 19M, a high-quality subset of Re-LAION-5B comprising 19 million 1024×1024 images with captions generated by a Mistral 7B Instruct-based LLaVA-Next model. Each caption follows a four-part template: subject, setting, aesthetics, and camera details. We fine-tune PixArt-Σ and Stable Diffusion 2 on both structured and randomly shuffled captions, and show that the structured versions consistently yield higher text-image alignment scores as measured by visual question answering (VQA) models. The dataset is publicly available at https://huggingface.co/datasets/supermodelresearch/Re-LAION-Caption19M.
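The abstract's ablation contrasts captions assembled in the fixed four-part order (subject, setting, aesthetics, camera details) with the same parts in random order. A minimal sketch of that contrast follows; the field contents, function names, and example text are hypothetical illustrations, not the paper's actual pipeline.

```python
import random

# The four-part template order described in the abstract.
TEMPLATE_PARTS = ["subject", "setting", "aesthetics", "camera"]

def build_structured_caption(parts: dict) -> str:
    """Join the four fields in the fixed template order."""
    return " ".join(parts[k] for k in TEMPLATE_PARTS)

def build_shuffled_caption(parts: dict, seed=None) -> str:
    """Ablation baseline: identical content, randomly ordered parts."""
    keys = list(TEMPLATE_PARTS)
    random.Random(seed).shuffle(keys)
    return " ".join(parts[k] for k in keys)

# Hypothetical example caption fields.
caption = {
    "subject": "A golden retriever catching a frisbee,",
    "setting": "in a sunlit park at midday,",
    "aesthetics": "vibrant colors, shallow depth of field,",
    "camera": "shot on an 85mm lens at f/1.8.",
}

structured = build_structured_caption(caption)
shuffled = build_shuffled_caption(caption, seed=0)
```

Both variants carry the same information; only the ordering differs, which isolates the effect of structure on prompt adherence.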
Nicholas Merchant, Haitz Sáez de Ocáriz Borde, Andrei Cristian Popescu, Carlos Garcia Jurado Suarez
Computing Technology; Computer Science and Technology
Nicholas Merchant, Haitz Sáez de Ocáriz Borde, Andrei Cristian Popescu, Carlos Garcia Jurado Suarez. Structured Captions Improve Prompt Adherence in Text-to-Image Models (Re-LAION-Caption 19M) [EB/OL]. (2025-07-07) [2025-07-19]. https://arxiv.org/abs/2507.05300.