Evaluating SAE interpretability without explanations
Sparse autoencoders (SAEs) and transcoders have become important tools for machine learning interpretability. However, measuring how interpretable they are remains challenging, and there is little consensus about which benchmarks to use. Most evaluation procedures start by producing a single-sentence natural language explanation for each latent, then score that explanation by how well it enables an LLM to predict the latent's activations in new contexts. This makes it difficult to disentangle the quality of the explanation-generation and scoring pipeline from the intrinsic interpretability of the discovered latents. In this work, we adapt existing methods to assess the interpretability of sparse coders without requiring natural language explanations as an intermediate step, enabling a more direct and potentially standardized assessment of interpretability. Furthermore, we compare the scores produced by our interpretability metrics with human evaluations on similar tasks and under varying setups, and offer suggestions to the community on improving the evaluation of these techniques.
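As a concrete illustration of what an explanation-free interpretability check can look like, the sketch below implements an intruder-detection protocol: a judge (an LLM or a human) sees several contexts that strongly activate a latent plus one unrelated context, and must pick the odd one out. Accuracy well above chance suggests the latent's activating contexts share a recognizable pattern, with no intermediate explanation required. This is a minimal sketch under our own assumptions; the function names, toy data, and random-guessing placeholder judge are illustrative and not taken from the paper.

import random
from typing import Callable, Sequence

def intruder_detection_score(
    latent_examples: Sequence[str],     # contexts where the latent fires strongly
    distractor_pool: Sequence[str],     # contexts sampled independently of the latent
    judge: Callable[[list[str]], int],  # returns the index it believes is the intruder
    n_trials: int = 50,
    k: int = 4,
    seed: int = 0,
) -> float:
    """Fraction of trials in which the judge spots the intruder.

    Chance level is 1 / (k + 1); accuracy above chance suggests the
    latent's activating contexts share a human-recognizable pattern.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        # Sample k activating contexts and hide one random distractor among them.
        group = rng.sample(list(latent_examples), k)
        intruder = rng.choice(list(distractor_pool))
        position = rng.randrange(k + 1)
        group.insert(position, intruder)
        if judge(group) == position:
            correct += 1
    return correct / n_trials

if __name__ == "__main__":
    # Toy data: a latent that fires on colour words; distractors drawn elsewhere.
    colour = ["the red car", "a blue sky", "green fields",
              "bright yellow paint", "deep purple robes"]
    other = ["stock prices fell", "the meeting ran late", "he kicked the ball"]
    # Placeholder judge: guesses at random, so the score should sit near chance (0.2).
    # In practice this would be replaced by an LLM call or a human rater.
    guess_rng = random.Random(1)
    random_judge = lambda options: guess_rng.randrange(len(options))
    print(intruder_detection_score(colour, other, random_judge, n_trials=200))

Because the judge operates directly on activating examples, the score measures the latents themselves rather than the quality of any generated explanation, which is the separation the abstract argues for.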
Gonçalo Paulo, Nora Belrose
Computing technology, computer technology
Gonçalo Paulo, Nora Belrose. Evaluating SAE interpretability without explanations [EB/OL]. (2025-07-11) [2025-07-25]. https://arxiv.org/abs/2507.08473.