
Localizing Knowledge in Diffusion Transformers

Source: arXiv

Abstract

Understanding how knowledge is distributed across the layers of generative models is crucial for improving interpretability, controllability, and adaptation. While prior work has explored knowledge localization in UNet-based architectures, Diffusion Transformer (DiT)-based models remain underexplored in this context. In this paper, we propose a model- and knowledge-agnostic method to localize where specific types of knowledge are encoded within the DiT blocks. We evaluate our method on state-of-the-art DiT-based models, including PixArt-alpha, FLUX, and SANA, across six diverse knowledge categories. We show that the identified blocks are both interpretable and causally linked to the expression of knowledge in generated outputs. Building on these insights, we apply our localization framework to two key applications: model personalization and knowledge unlearning. In both settings, our localized fine-tuning approach enables efficient and targeted updates, reducing computational cost, improving task-specific performance, and better preserving general model behavior with minimal interference to unrelated or surrounding content. Overall, our findings offer new insights into the internal structure of DiTs and introduce a practical pathway for more interpretable, efficient, and controllable model editing.
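To make the "localized fine-tuning" idea above concrete, here is a minimal PyTorch sketch, not the authors' implementation: it freezes the full model and unfreezes only a given set of DiT blocks, so that updates for personalization or unlearning touch just the localized layers. The attribute `model.blocks` and the block indices are assumptions for illustration; the paper's localization procedure would supply the actual set of knowledge-relevant blocks.

```python
import torch
import torch.nn as nn

def localized_finetune_params(model: nn.Module, localized_blocks: list[int]):
    """Return the parameters of the identified DiT blocks, freezing the rest."""
    for p in model.parameters():
        p.requires_grad_(False)                 # freeze the entire model
    trainable = []
    for i in localized_blocks:
        for p in model.blocks[i].parameters():  # hypothetical block container
            p.requires_grad_(True)              # unfreeze only localized blocks
            trainable.append(p)
    return trainable

# Hypothetical usage: fine-tune only blocks 4 and 5 (e.g., to unlearn a
# concept) while the rest of the model stays fixed.
# params = localized_finetune_params(model, localized_blocks=[4, 5])
# optimizer = torch.optim.AdamW(params, lr=1e-5)
```

Restricting the optimizer to this parameter subset is what yields the reduced computational cost and the minimal interference with unrelated content that the abstract reports.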

Arman Zarei, Samyadeep Basu, Keivan Rezaei, Zihao Lin, Sayan Nag, Soheil Feizi

Subject areas: Natural science research methods; Information science and information technology

Arman Zarei, Samyadeep Basu, Keivan Rezaei, Zihao Lin, Sayan Nag, Soheil Feizi. Localizing Knowledge in Diffusion Transformers [EB/OL]. (2025-05-24) [2025-06-22]. https://arxiv.org/abs/2505.18832.