
Interpretable Reward Model via Sparse Autoencoder

Source: arXiv
English Abstract

Large language models (LLMs) have been widely deployed across numerous fields. Reinforcement Learning from Human Feedback (RLHF) leverages reward models (RMs) as proxies for human preferences to align LLM behaviors with human values, making the accuracy, reliability, and interpretability of RMs critical for effective alignment. However, traditional RMs lack interpretability, offer limited insight into the reasoning behind reward assignments, and are inflexible toward user preference shifts. While recent multidimensional RMs aim for improved interpretability, they often fail to provide feature-level attribution and require costly annotations. To overcome these limitations, we introduce the Sparse Autoencoder-enhanced Reward Model (SARM), a novel architecture that integrates a pretrained Sparse Autoencoder (SAE) into a reward model. SARM maps the hidden activations of an LLM-based RM into an interpretable, sparse, and monosemantic feature space, from which a scalar head aggregates feature activations to produce transparent and conceptually meaningful reward scores. Empirical evaluations demonstrate that SARM facilitates direct feature-level attribution of reward assignments, allows dynamic adjustment to preference shifts, and achieves superior alignment performance compared to conventional reward models. Our code is available at https://github.com/schrieffer-z/sarm.
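The abstract describes the architecture at a high level: an SAE encodes the RM's hidden activations into sparse, monosemantic features, and a scalar head scores those features, so each feature's contribution to the reward can be read off directly. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation (see the GitHub repository above for the real code); the ReLU encoder, the linear scoring head, and all class and variable names here are assumptions made for illustration.

```python
# Hypothetical sketch of the SARM idea from the abstract, NOT the authors' code.
# Assumption: a ReLU linear encoder produces sparse feature activations, and a
# linear scalar head over those activations yields the reward.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, hidden_dim: int, feature_dim: int):
        super().__init__()
        self.encoder = nn.Linear(hidden_dim, feature_dim)
        self.decoder = nn.Linear(feature_dim, hidden_dim)

    def encode(self, h: torch.Tensor) -> torch.Tensor:
        # Non-negative, sparse feature activations (the "monosemantic" features).
        return torch.relu(self.encoder(h))


class SARMHead(nn.Module):
    """Reward head that scores SAE feature activations instead of raw hidden states."""

    def __init__(self, sae: SparseAutoencoder, feature_dim: int):
        super().__init__()
        self.sae = sae
        self.score = nn.Linear(feature_dim, 1, bias=False)

    def forward(self, last_hidden: torch.Tensor) -> torch.Tensor:
        feats = self.sae.encode(last_hidden)       # (batch, feature_dim), sparse
        return self.score(feats).squeeze(-1)       # scalar reward per example

    def attribute(self, last_hidden: torch.Tensor) -> torch.Tensor:
        # Feature-level attribution: each feature's additive contribution to the reward.
        feats = self.sae.encode(last_hidden)
        return feats * self.score.weight.squeeze(0)


if __name__ == "__main__":
    hidden_dim, feature_dim = 64, 512
    head = SARMHead(SparseAutoencoder(hidden_dim, feature_dim), feature_dim)
    h = torch.randn(2, hidden_dim)                 # stand-in for the RM's hidden activations
    print(head(h).shape, head.attribute(h).shape)  # torch.Size([2]) torch.Size([2, 512])
```

Because the reward here is a linear combination of sparse feature activations, attribution reduces to elementwise products of activations and head weights, which is one plausible reading of the "direct feature-level attribution" claim in the abstract.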

Shuyi Zhang, Wei Shi, Sihang Li, Jiayi Liao, Tao Liang, Hengxing Cai, Xiang Wang

Subject: Computing technology, computer technology

Shuyi Zhang, Wei Shi, Sihang Li, Jiayi Liao, Tao Liang, Hengxing Cai, Xiang Wang. Interpretable Reward Model via Sparse Autoencoder [EB/OL]. (2025-08-14) [2025-08-24]. https://arxiv.org/abs/2508.08746.
