
Alchemist: Turning Public Text-to-Image Data into Generative Gold

Source: arXiv
Abstract (English)

Pre-training equips text-to-image (T2I) models with broad world knowledge, but this alone is often insufficient to achieve high aesthetic quality and alignment. Consequently, supervised fine-tuning (SFT) is crucial for further refinement. However, its effectiveness highly depends on the quality of the fine-tuning dataset. Existing public SFT datasets frequently target narrow domains (e.g., anime or specific art styles), and the creation of high-quality, general-purpose SFT datasets remains a significant challenge. Current curation methods are often costly and struggle to identify truly impactful samples. This challenge is further complicated by the scarcity of public general-purpose datasets, as leading models often rely on large, proprietary, and poorly documented internal data, hindering broader research progress. This paper introduces a novel methodology for creating general-purpose SFT datasets by leveraging a pre-trained generative model as an estimator of high-impact training samples. We apply this methodology to construct and release Alchemist, a compact (3,350 samples) yet highly effective SFT dataset. Experiments demonstrate that Alchemist substantially improves the generative quality of five public T2I models while preserving diversity and style. Additionally, we release the fine-tuned models' weights to the public.
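The abstract describes selecting high-impact samples from a large public pool by scoring them with a pre-trained generative model, then fine-tuning on the compact selected set. The paper's actual estimator is not specified here; the sketch below only illustrates the select-top-k-by-score pattern, with a hypothetical `toy_score` standing in for the model-based impact estimator and `select_sft_samples` as an assumed helper name.

```python
def select_sft_samples(candidates, score_fn, k):
    """Rank candidate (prompt, image) pairs by an impact score and keep the top-k.

    `score_fn` is a stand-in for a model-based estimator of how much a
    sample would improve fine-tuning; higher means more impactful.
    """
    return sorted(candidates, key=score_fn, reverse=True)[:k]

def toy_score(sample):
    # Hypothetical placeholder score: the paper's estimator is a pre-trained
    # generative model, not prompt length. This is illustration only.
    prompt, _image = sample
    return len(prompt)

# Toy candidate pool of (prompt, image) pairs; images omitted for brevity.
pool = [("prompt %d " % i * (i % 5 + 1), None) for i in range(100)]
sft_set = select_sft_samples(pool, toy_score, k=10)
print(len(sft_set))
```

In the paper's setting, the selected subset (3,350 samples for Alchemist) then serves as the SFT dataset for the text-to-image model.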

Valerii Startsev, Alexander Ustyuzhanin, Alexey Kirillov, Dmitry Baranchuk, Sergey Kastryulin

Subject: Computing Technology, Computer Technology

Valerii Startsev, Alexander Ustyuzhanin, Alexey Kirillov, Dmitry Baranchuk, Sergey Kastryulin. Alchemist: Turning Public Text-to-Image Data into Generative Gold [EB/OL]. (2025-05-25) [2025-06-29]. https://arxiv.org/abs/2505.19297.
