
SplInterp: Improving our Understanding and Training of Sparse Autoencoders

Source: arXiv
Abstract

Sparse autoencoders (SAEs) have received considerable recent attention as tools for mechanistic interpretability, showing success at extracting interpretable features even from very large LLMs. However, this research has been largely empirical, and there have been recent doubts about the true utility of SAEs. In this work, we seek to enhance the theoretical understanding of SAEs, using the spline theory of deep learning. By situating SAEs in this framework, we discover that SAEs generalise ``$k$-means autoencoders'' to be piecewise affine, but sacrifice accuracy for interpretability relative to the optimal ``$k$-means-esque plus local principal component analysis (PCA)'' piecewise affine autoencoder. We characterise the underlying geometry of (TopK) SAEs using power diagrams, and we develop a novel proximal alternating method SGD (PAM-SGD) algorithm for training SAEs, with both solid theoretical foundations and promising empirical results in MNIST and LLM experiments, particularly in sample efficiency and (in the LLM setting) improved sparsity of codes. All code is available at: https://github.com/splInterp2025/splInterp
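
To make the architecture discussed in the abstract concrete, the following is a minimal NumPy sketch of a generic TopK sparse autoencoder forward pass (affine encoder, hard top-k selection, linear decoder). It is an illustrative sketch only, not code from the authors' repository; all names, shapes, and the random weight initialisation are assumptions made for the example.

import numpy as np

def topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k):
    """Generic TopK SAE forward pass (illustrative sketch, not the paper's code).

    x:     (d,)   input activation vector
    W_enc: (m, d) encoder weights, m >> d (overcomplete dictionary)
    b_enc: (m,)   encoder bias
    W_dec: (d, m) decoder weights
    b_dec: (d,)   decoder bias
    k:     number of latent units kept active
    """
    # Encoder pre-activations.
    pre = W_enc @ x + b_enc
    # Hard TopK sparsification: keep the k largest pre-activations, zero the rest.
    z = np.zeros_like(pre)
    idx = np.argpartition(pre, -k)[-k:]
    z[idx] = np.maximum(pre[idx], 0.0)   # ReLU on the surviving units
    # Linear decoder reconstructs the input from the sparse code.
    x_hat = W_dec @ z + b_dec
    return z, x_hat

# Toy usage with random weights (illustration only).
rng = np.random.default_rng(0)
d, m, k = 8, 32, 4
x = rng.normal(size=d)
z, x_hat = topk_sae_forward(
    x,
    W_enc=rng.normal(size=(m, d)) / np.sqrt(d),
    b_enc=np.zeros(m),
    W_dec=rng.normal(size=(d, m)) / np.sqrt(m),
    b_dec=np.zeros(d),
    k=k,
)
assert np.count_nonzero(z) <= k   # the code is k-sparse by construction

Because the top-k selection of active units is piecewise constant in the input, the map from input to reconstruction is piecewise affine, which is the property that the spline-theoretic and power-diagram analysis described in the abstract builds on.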

Jeremy Budd, Javier Ideami, Benjamin Macdowall Rynne, Keith Duggar, Randall Balestriero

Subject areas: computing technology; computer technology

Jeremy Budd, Javier Ideami, Benjamin Macdowall Rynne, Keith Duggar, Randall Balestriero. SplInterp: Improving our Understanding and Training of Sparse Autoencoders [EB/OL]. (2025-05-17) [2025-07-16]. https://arxiv.org/abs/2505.11836
