
Ensembling Sparse Autoencoders


Source: arXiv
English Abstract

Sparse autoencoders (SAEs) are used to decompose neural network activations into human-interpretable features. Typically, features learned by a single SAE are used for downstream applications. However, it has recently been shown that SAEs trained with different initial weights can learn different features, demonstrating that a single SAE captures only a limited subset of features that can be extracted from the activation space. Motivated by this limitation, we propose to ensemble multiple SAEs through naive bagging and boosting. Specifically, SAEs trained with different weight initializations are ensembled in naive bagging, whereas SAEs sequentially trained to minimize the residual error are ensembled in boosting. We evaluate our ensemble approaches with three settings of language models and SAE architectures. Our empirical results demonstrate that ensembling SAEs can improve the reconstruction of language model activations, diversity of features, and SAE stability. Furthermore, ensembling SAEs performs better than applying a single SAE on downstream tasks such as concept detection and spurious correlation removal, showing improved practical utility.
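The two ensembling strategies described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the authors' implementation: a standard ReLU sparse autoencoder is assumed, and the names SparseAutoencoder, ensemble_bagging, and ensemble_boosting are hypothetical. Naive bagging applies independently initialized SAEs to the same activations, concatenating their features and averaging their reconstructions; boosting applies each SAE to the residual error left by the previous ones and sums the reconstructions.

import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Standard ReLU SAE: encode LM activations into a sparse feature space, then decode."""

    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        features = torch.relu(self.encoder(x))   # sparse feature activations
        reconstruction = self.decoder(features)  # reconstructed activations
        return features, reconstruction


def ensemble_bagging(saes: list[SparseAutoencoder], x: torch.Tensor):
    """Naive bagging: SAEs trained from different initializations are applied
    independently; features are concatenated and reconstructions averaged."""
    features, recons = zip(*(sae(x) for sae in saes))
    return torch.cat(features, dim=-1), torch.stack(recons).mean(dim=0)


def ensemble_boosting(saes: list[SparseAutoencoder], x: torch.Tensor):
    """Boosting: each SAE is fitted (and here applied) to the residual error
    left by the previously applied SAEs, so reconstructions are summed."""
    residual = x
    all_features, total_recon = [], torch.zeros_like(x)
    for sae in saes:
        feats, recon = sae(residual)
        all_features.append(feats)
        total_recon = total_recon + recon
        residual = residual - recon
    return torch.cat(all_features, dim=-1), total_recon

In this sketch, both ensembles expose a larger concatenated feature dictionary than a single SAE while producing a single reconstruction of the language model activations, which is what allows them to be dropped into downstream uses such as concept detection.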

Soham Gadgil, Chris Lin, Su-In Lee

Subject: Computing Technology, Computer Technology

Soham Gadgil, Chris Lin, Su-In Lee. Ensembling Sparse Autoencoders [EB/OL]. (2025-05-21) [2025-06-18]. https://arxiv.org/abs/2505.16077.
