
A Training-Free Style-Personalization via Scale-wise Autoregressive Model

Source: arXiv
Abstract

We present a training-free framework for style-personalized image generation that controls content and style information during inference using a scale-wise autoregressive model. Our method employs a three-path design--content, style, and generation--each guided by a corresponding text prompt, enabling flexible and efficient control over image semantics without any additional training. A central contribution of this work is a step-wise and attention-wise intervention analysis. Through systematic prompt and feature injection, we find that early-to-middle generation steps play a pivotal role in shaping both content and style, and that query features predominantly encode content-specific information. Guided by these insights, we introduce two targeted mechanisms: Key Stage Attention Sharing, which aligns content and style during the semantically critical steps, and Adaptive Query Sharing, which reinforces content semantics in later steps through similarity-aware query blending. Extensive experiments demonstrate that our method achieves competitive style fidelity and prompt fidelity compared to fine-tuned baselines, while offering faster inference and greater deployment flexibility.
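To make the two attention interventions concrete, below is a minimal sketch of how "Key Stage Attention Sharing" (sharing reference-path keys/values during the early-to-middle scales) and "Adaptive Query Sharing" (similarity-aware blending of the generation query toward the content-path query in later steps) could look in code. All function names, step ranges, and the blending rule here are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the two interventions described in the abstract.
# Step ranges (key_stages, late_stages) and the similarity-based blend weight
# are assumptions for illustration; see the paper for the real design.
import torch
import torch.nn.functional as F


def key_stage_attention_sharing(q_gen, k_gen, v_gen, k_ref, v_ref,
                                step, key_stages=range(2, 7)):
    """During the semantically critical early-to-middle scales, let the
    generation path also attend over keys/values from a reference
    (content or style) path; otherwise use standard self-attention."""
    if step in key_stages:
        k = torch.cat([k_gen, k_ref], dim=-2)  # concatenate along the token axis
        v = torch.cat([v_gen, v_ref], dim=-2)
    else:
        k, v = k_gen, v_gen
    return F.scaled_dot_product_attention(q_gen, k, v)


def adaptive_query_sharing(q_gen, q_content, step, late_stages=range(7, 10)):
    """Similarity-aware query blending for later steps: pull the generation
    query toward the content-path query in proportion to their per-token
    cosine similarity, reinforcing content semantics where the paths agree."""
    if step not in late_stages:
        return q_gen
    sim = F.cosine_similarity(q_gen, q_content, dim=-1)  # (batch, heads, tokens)
    alpha = sim.clamp(min=0.0).unsqueeze(-1)             # per-token blend weight
    return (1.0 - alpha) * q_gen + alpha * q_content


if __name__ == "__main__":
    # Toy tensors with shape (batch, heads, tokens, head_dim).
    q = torch.randn(1, 8, 16, 64)
    k = torch.randn(1, 8, 16, 64)
    v = torch.randn(1, 8, 16, 64)
    q_content = torch.randn(1, 8, 16, 64)

    out = key_stage_attention_sharing(q, k, v, k_ref=k, v_ref=v, step=3)
    q_blend = adaptive_query_sharing(q, q_content, step=8)
    print(out.shape, q_blend.shape)
```

Because both mechanisms operate only on attention inputs at inference time, they can be dropped into a pretrained scale-wise autoregressive model without any fine-tuning, which is the deployment advantage the abstract emphasizes.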

Kyoungmin Lee, Jihun Park, Jongmin Gim, Wonhyeok Choi, Kyumin Hwang, Jaeyeul Kim, Sunghoon Im

Subject: Computing Technology, Computer Technology

Kyoungmin Lee, Jihun Park, Jongmin Gim, Wonhyeok Choi, Kyumin Hwang, Jaeyeul Kim, Sunghoon Im. A Training-Free Style-Personalization via Scale-wise Autoregressive Model [EB/OL]. (2025-07-06) [2025-07-25]. https://arxiv.org/abs/2507.04482.
