
Teach Old SAEs New Domain Tricks with Boosting

Source: arXiv
English Abstract

Sparse Autoencoders have emerged as powerful tools for interpreting the internal representations of Large Language Models, yet they often fail to capture domain-specific features not prevalent in their training corpora. This paper introduces a residual learning approach that addresses this feature blindness without requiring complete retraining. We propose training a secondary SAE specifically to model the reconstruction error of a pretrained SAE on domain-specific texts, effectively capturing features missed by the primary model. By summing the outputs of both models during inference, we demonstrate significant improvements in both LLM cross-entropy and explained variance metrics across multiple specialized domains. Our experiments show that this method efficiently incorporates new domain knowledge into existing SAEs while maintaining their performance on general tasks. This approach enables researchers to selectively enhance SAE interpretability for specific domains of interest, opening new possibilities for targeted mechanistic interpretability of LLMs.
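As a concrete illustration of the residual-learning recipe the abstract describes, here is a minimal PyTorch sketch. It is not the authors' implementation: the `SparseAutoencoder` class, the hyperparameters (`steps`, `lr`, `l1_coef`), and the plain ReLU/L1 design are illustrative assumptions, and the paper's actual architecture and training details may differ. The primary SAE is frozen; the secondary SAE is trained to reconstruct the error the primary leaves on domain-specific activations, and at inference the two reconstructions are summed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseAutoencoder(nn.Module):
    """A minimal ReLU sparse autoencoder over LLM activations (assumed design)."""

    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return F.relu(self.encoder(x))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encode(x))


def train_residual_sae(primary: SparseAutoencoder,
                       residual: SparseAutoencoder,
                       domain_acts: torch.Tensor,
                       steps: int = 1_000,
                       lr: float = 1e-3,
                       l1_coef: float = 1e-3) -> None:
    """Fit `residual` to the reconstruction error the frozen `primary` SAE
    leaves on domain-specific activations (hyperparameters are illustrative)."""
    primary.eval()
    for p in primary.parameters():
        p.requires_grad_(False)  # the pretrained SAE stays untouched

    opt = torch.optim.Adam(residual.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            # The training target is what the pretrained SAE fails to reconstruct.
            error = domain_acts - primary(domain_acts)
        feats = residual.encode(domain_acts)
        recon = residual.decoder(feats)
        # MSE against the residual plus an L1 sparsity penalty on the features.
        loss = F.mse_loss(recon, error) + l1_coef * feats.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()


@torch.no_grad()
def boosted_reconstruction(primary: SparseAutoencoder,
                           residual: SparseAutoencoder,
                           x: torch.Tensor) -> torch.Tensor:
    """At inference, sum both SAEs' outputs, as described in the abstract."""
    return primary(x) + residual(x)
```

Because the primary SAE never receives gradients, its features and behavior on general inputs are preserved, which matches the abstract's claim that general-task performance is maintained; the secondary model trains only on what the primary misses.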

Nikita Koriagin, Yaroslav Aksenov, Daniil Laptev, Gleb Gerasimov, Nikita Balagansky, Daniil Gavrilov

Subject classification: Computing Technology; Computer Technology

Nikita Koriagin, Yaroslav Aksenov, Daniil Laptev, Gleb Gerasimov, Nikita Balagansky, Daniil Gavrilov. Teach Old SAEs New Domain Tricks with Boosting [EB/OL]. (2025-07-17) [2025-08-10]. https://arxiv.org/abs/2507.12990.
