One-Step is Enough: Sparse Autoencoders for Text-to-Image Diffusion Models
For large language models (LLMs), sparse autoencoders (SAEs) have been shown to decompose intermediate representations, which are often not directly interpretable, into sparse sums of interpretable features, facilitating better control and subsequent analysis. However, similar analyses and approaches have been lacking for text-to-image models. We investigate the possibility of using SAEs to learn interpretable features for SDXL Turbo, a few-step text-to-image diffusion model. To this end, we train SAEs on the updates performed by transformer blocks within SDXL Turbo's denoising U-net in its 1-step setting. Interestingly, we find that they generalize to 4-step SDXL Turbo and even to the multi-step SDXL base model (i.e., a different model) without additional training. In addition, we show that their learned features are interpretable, causally influence the generation process, and reveal specialization among the blocks. We do so by creating RIEBench, a representation-based image editing benchmark that edits images during generation by turning individual SAE features on and off. This allows us to track which transformer blocks' features are the most impactful for each edit category. Our work is the first investigation of SAEs for interpretability in text-to-image diffusion models, and our results establish SAEs as a promising approach for understanding and manipulating the internal mechanisms of text-to-image models.
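To make the setup concrete, below is a minimal sketch of the kind of sparse autoencoder the abstract describes, assuming a standard ReLU encoder with an L1 sparsity penalty; the paper's exact architecture, sparsity mechanism, and hyperparameters may differ. The names `SparseAutoencoder`, `sae_loss`, and `edit_update` are illustrative, not from the authors' code, and `x` stands for the update a transformer block adds to its input activations.

```python
# A minimal sketch (not the authors' exact architecture): a sparse autoencoder
# that reconstructs transformer-block updates as a sparse non-negative
# combination of learned feature directions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        # Feature dictionary: each row of W_dec is one learned direction.
        self.W_enc = nn.Parameter(torch.randn(d_model, n_features) * 0.01)
        self.W_dec = nn.Parameter(torch.randn(n_features, d_model) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(n_features))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        # x: block-update vectors, shape (batch, d_model).
        z = F.relu((x - self.b_dec) @ self.W_enc + self.b_enc)  # sparse codes
        x_hat = z @ self.W_dec + self.b_dec                     # reconstruction
        return x_hat, z

def sae_loss(x, x_hat, z, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparse codes.
    return F.mse_loss(x_hat, x) + l1_coeff * z.abs().mean()

def edit_update(sae: SparseAutoencoder, x: torch.Tensor,
                feature_idx: int, scale: float = 0.0):
    # Hypothetical editing helper in the spirit of RIEBench: re-encode the
    # block update, rescale one SAE feature (scale=0 turns it off,
    # scale>1 amplifies it), and decode the edited update.
    _, z = sae(x)
    z = z.clone()
    z[:, feature_idx] *= scale
    return z @ sae.W_dec + sae.b_dec
```

In this reading of the abstract, training data for the SAE would be block updates collected from 1-step SDXL Turbo generations, and an edit amounts to substituting `edit_update(...)` for the original block update during the denoising pass.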
Caglar Gulcehre, Robert West, Mikhail Terekhov, Justin Deschenaux, Viacheslav Surkov, Chris Wendler, Antonio Mari, David Bau
Computing Technology; Computer Technology
Caglar Gulcehre, Robert West, Mikhail Terekhov, Justin Deschenaux, Viacheslav Surkov, Chris Wendler, Antonio Mari, David Bau. One-Step is Enough: Sparse Autoencoders for Text-to-Image Diffusion Models [EB/OL]. (2025-06-22) [2025-07-16]. https://arxiv.org/abs/2410.22366