National Preprint Platform

Interpreting Large Text-to-Image Diffusion Models with Dictionary Learning

Source: arXiv
Abstract

Sparse autoencoders (SAEs) are a promising new approach for decomposing language model activations for interpretation and control. They have been applied successfully to vision transformer image encoders and to small-scale diffusion models. Inference-Time Decomposition of Activations (ITDA) is a recently proposed variant of dictionary learning that takes the dictionary to be a set of data points drawn from the activation distribution and reconstructs activations with gradient pursuit. We apply SAEs and ITDA to a large text-to-image diffusion model, Flux 1, and evaluate the interpretability of both methods' features by introducing a visual automated interpretation pipeline. We find that SAEs accurately reconstruct residual stream embeddings and outperform MLP neurons on interpretability, and that SAE features can steer image generation through activation addition. ITDA achieves interpretability comparable to that of SAEs.
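The core idea behind ITDA, as the abstract describes it, is that the dictionary is simply a set of activation data points rather than learned features, and activations are reconstructed by a greedy pursuit algorithm. A minimal NumPy sketch of this idea is below, using plain matching pursuit as a simplified stand-in for the gradient pursuit used in the paper; the dictionary size, sparsity level `k`, and normalization choices here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def matching_pursuit(x, dictionary, k=5):
    """Greedily reconstruct x as a sparse combination of dictionary rows.

    Simplified matching pursuit: at each step, pick the atom most
    correlated with the residual and subtract its contribution. In the
    ITDA setting, each row of `dictionary` would be an activation vector
    sampled from the model, not a learned feature direction.
    """
    residual = x.astype(float).copy()
    coeffs = np.zeros(len(dictionary))
    # Normalize atoms so inner products are comparable across rows.
    norms = np.linalg.norm(dictionary, axis=1, keepdims=True)
    atoms = dictionary / np.clip(norms, 1e-8, None)
    for _ in range(k):
        scores = atoms @ residual            # correlation with residual
        i = int(np.argmax(np.abs(scores)))   # best-matching atom
        coeffs[i] += scores[i]
        residual = residual - scores[i] * atoms[i]
    return coeffs, residual

# Toy demo: 64 "data point" atoms in a 16-dimensional activation space.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 16))
x = D[3] / np.linalg.norm(D[3]) + 0.1 * rng.normal(size=16)
coeffs, residual = matching_pursuit(x, D, k=5)
```

The reconstruction `x - residual` uses at most `k` atoms, and each pursuit step shrinks the residual norm, which is the sparse-reconstruction behavior the abstract attributes to ITDA.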

Stepan Shabalin, Ayush Panda, Dmitrii Kharlapenko, Abdur Raheem Ali, Yixiong Hao, Arthur Conmy

Subjects: Computing Technology, Computer Technology

Stepan Shabalin, Ayush Panda, Dmitrii Kharlapenko, Abdur Raheem Ali, Yixiong Hao, Arthur Conmy. Interpreting Large Text-to-Image Diffusion Models with Dictionary Learning [EB/OL]. (2025-05-30) [2025-06-16]. https://arxiv.org/abs/2505.24360.
