SRMIR: Shadow Reward Models Based on Introspective Reasoning for LLM Alignment
Aligning large language models (LLMs) with human preferences and values is vital for their application. However, current alignment methods face three main limitations: (1) reliance on costly human annotation; (2) the alignment tax; (3) shallow alignment that is vulnerable to jailbreak attacks. Additionally, current alignment datasets often suffer from uneven distributions, leading to overrepresentation of some topics and neglect of others. To address these issues, we propose SRMIR (Shadow Reward Models Based on Introspective Reasoning), inspired by shadow models in membership inference attacks. We first construct a balanced safety Chain of Draft (CoD) dataset across 7 harmful types with structured prompts that leverage the introspective reasoning capabilities of LLMs, then train a set of specialized reward models to guide policy optimization through Group Relative Policy Optimization (GRPO). We apply two strategies, linear combination and a categorized approach, to integrate the shadow reward models for policy optimization. Comparing the two, we find that the latter achieves superior alignment despite higher computational costs. Experiments across several LLMs demonstrate that SRMIR significantly outperforms existing methods.
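The abstract describes two ways of integrating the per-category shadow reward models into a single reward signal for GRPO: a linear combination and a categorized approach. The sketch below is a minimal illustration only, assuming each shadow reward model has already produced a scalar score for a response; the category labels, weights, and the classify function are hypothetical stand-ins, not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): two ways to aggregate
# scores from per-category shadow reward models into one reward for GRPO.
# Category names, weights, and the classifier below are illustrative assumptions.

from typing import Callable, Dict

# Assumed labels for the 7 harmful types covered by the balanced CoD dataset.
HARM_CATEGORIES = [
    "violence", "hate", "self_harm", "sexual", "privacy", "illegal", "deception",
]


def linear_combination_reward(
    scores: Dict[str, float],   # category -> shadow reward model score
    weights: Dict[str, float],  # category -> mixing weight (assumed to sum to 1)
) -> float:
    """Strategy 1: a single scalar reward as a weighted sum over all categories."""
    return sum(weights[c] * scores[c] for c in HARM_CATEGORIES)


def categorized_reward(
    scores: Dict[str, float],
    classify: Callable[[str], str],  # maps a prompt to its harm category
    prompt: str,
) -> float:
    """Strategy 2: route each prompt to the shadow reward model of its own category."""
    return scores[classify(prompt)]


if __name__ == "__main__":
    scores = {c: 0.5 for c in HARM_CATEGORIES}
    scores["privacy"] = 0.9
    uniform = {c: 1.0 / len(HARM_CATEGORIES) for c in HARM_CATEGORIES}

    print(linear_combination_reward(scores, uniform))  # ~0.557, blends all categories
    print(categorized_reward(scores, lambda p: "privacy", "Share my SSN?"))  # 0.9
```

Under this reading, the categorized strategy requires an extra routing step (and potentially one reward evaluation per category during training), which is consistent with the abstract's note that it achieves better alignment at higher computational cost.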
Ruoxi Cheng, Shuirong Cao
Computing Technology; Computer Technology
Ruoxi Cheng, Shuirong Cao. SRMIR: Shadow Reward Models Based on Introspective Reasoning for LLM Alignment [EB/OL]. (2025-03-23) [2025-04-27]. https://arxiv.org/abs/2503.18991.