Mechanistic Interpretability of Diffusion Models: Circuit-Level Analysis and Causal Validation
We present a quantitative circuit-level analysis of diffusion models, establishing computational pathways and mechanistic principles underlying image generation processes. Through systematic intervention experiments across 2,000 synthetic and 2,000 CelebA facial images, we discover fundamental algorithmic differences in how diffusion architectures process synthetic versus naturalistic data distributions. Our investigation reveals that real-world face processing requires circuits with measurably higher computational complexity (complexity ratio = 1.084 ± 0.008, p < 0.001), exhibiting distinct attention specialization patterns with entropy divergence ranging from 0.015 to 0.166 across denoising timesteps. We identify eight functionally distinct attention mechanisms showing specialized computational roles: edge detection (entropy = 3.18 ± 0.12), texture analysis (entropy = 4.16 ± 0.08), and semantic understanding (entropy = 2.67 ± 0.15). Intervention analysis demonstrates critical computational bottlenecks where targeted ablations produce 25.6% to 128.3% performance degradation, providing causal evidence for identified circuit functions. These findings establish quantitative foundations for algorithmic understanding and control of generative model behavior through mechanistic intervention strategies.
Dip Roy
Subjects: Information science and information technology; Computing and computer technology; Natural science research methods
Dip Roy. Mechanistic Interpretability of Diffusion Models: Circuit-Level Analysis and Causal Validation [EB/OL]. (2025-06-04) [2025-07-02]. https://arxiv.org/abs/2506.17237.