National Preprint Platform

Fine-Tuning Visual Autoregressive Models for Subject-Driven Generation


Source: arXiv
Abstract

Recent advances in text-to-image generative models have enabled numerous practical applications, including subject-driven generation, which fine-tunes pretrained models to capture subject semantics from only a few examples. While diffusion-based models produce high-quality images, their extensive denoising steps incur significant computational overhead, limiting real-world applicability. Visual autoregressive (VAR) models, which predict next-scale tokens rather than spatially adjacent ones, offer significantly faster inference suitable for practical deployment. In this paper, we propose the first VAR-based approach for subject-driven generation. However, naïvely fine-tuning VAR leads to computational overhead, language drift, and reduced diversity. To address these challenges, we introduce selective layer tuning to reduce complexity and prior distillation to mitigate language drift. Additionally, we find that the early stages have a greater influence on the generation of the subject than the later stages, which merely synthesize local details. Based on this finding, we propose scale-wise weighted tuning, which prioritizes coarser resolutions, encouraging the model to focus on subject-relevant information rather than local details. Extensive experiments validate that our method significantly outperforms diffusion-based baselines across various metrics and demonstrate its practical usage.
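The scale-wise weighted tuning described above can be sketched as a simple re-weighting of per-scale losses during fine-tuning, with larger weights on coarser scales. The exponential decay schedule, function name, and parameter below are illustrative assumptions for this sketch, not the paper's actual formulation:

```python
def scale_weighted_loss(per_scale_losses, decay=0.8):
    """Combine per-scale training losses for a VAR model.

    per_scale_losses: losses ordered from coarsest to finest scale.
    decay: hypothetical factor (<1) shrinking the weight of each
           successively finer scale, so coarse, subject-level stages
           dominate the fine-tuning signal.
    """
    weights = [decay ** i for i in range(len(per_scale_losses))]
    norm = sum(weights)
    # Normalized weighted sum: coarse scales contribute most.
    return sum(w * l for w, l in zip(weights, per_scale_losses)) / norm
```

With `decay=1.0` this reduces to the uniform average used in ordinary fine-tuning; smaller values shift the objective toward the early, subject-defining scales.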

Jiwoo Chung, Sangeek Hyun, Hyunjun Kim, Eunseo Koh, MinKyu Lee, Jae-Pil Heo

Subjects: Natural science research methods; Information science and information technology

Jiwoo Chung, Sangeek Hyun, Hyunjun Kim, Eunseo Koh, MinKyu Lee, Jae-Pil Heo. Fine-Tuning Visual Autoregressive Models for Subject-Driven Generation [EB/OL]. (2025-04-03) [2025-05-03]. https://arxiv.org/abs/2504.02612.
