Towards Reliable, Uncertainty-Aware Alignment
Alignment of large language models (LLMs) typically involves training a reward model on preference data, followed by policy optimization with respect to the reward model. However, optimizing policies with respect to a single reward model estimate can render the resulting policy vulnerable to inaccuracies in the reward model. We empirically study the variability of reward model training on open-source benchmarks. We observe that independently trained reward models on the same preference dataset can exhibit substantial disagreement, highlighting the instability of current alignment strategies. Employing a theoretical model, we demonstrate that variability in reward model estimation can cause overfitting, leading to the risk of performance degradation. To mitigate this risk, we propose a variance-aware policy optimization framework for preference-based alignment. The key ingredient of the framework is a new policy regularizer that incorporates reward model variance estimates. We show that variance-aware policy optimization provably reduces the risk of outputting a worse policy than the default. Experiments across diverse LLM and reward model configurations confirm that our approach yields more stable and robust alignment than the standard (variance-unaware) pipeline.
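The abstract does not spell out the exact form of the variance-aware regularizer. As a rough illustration of the idea only, the sketch below shows one common way such a penalty could be realized: estimate the reward variance from an ensemble of independently trained reward models and penalize responses on which the ensemble disagrees. The function name `variance_penalized_reward` and the penalty weight `beta` are hypothetical and not taken from the paper.

```python
import torch


def variance_penalized_reward(rewards: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """
    Aggregate per-response scores from an ensemble of independently trained
    reward models into a single conservative (variance-penalized) score.

    rewards: tensor of shape (num_reward_models, batch_size) holding the score
             each ensemble member assigns to each response.
    beta:    weight on the disagreement penalty (hypothetical hyperparameter;
             the paper's actual regularizer may take a different form).
    """
    mean = rewards.mean(dim=0)   # ensemble mean reward per response
    std = rewards.std(dim=0)     # disagreement across reward models as a variance proxy
    return mean - beta * std     # downweight responses with unreliable reward estimates


# Example: three reward models scoring a batch of four candidate responses.
ensemble_scores = torch.tensor([
    [0.9, 0.2, 0.5, 0.7],
    [0.8, 0.3, 0.1, 0.7],
    [1.0, 0.1, 0.9, 0.6],
])
print(variance_penalized_reward(ensemble_scores, beta=0.5))
```

In a policy-optimization loop, this penalized score would stand in for the single reward model estimate used in the standard (variance-unaware) pipeline, so that the policy is not rewarded for exploiting regions where the reward estimate is unstable.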
Debangshu Banerjee, Kintan Saha, Aditya Gopalan
Computing Technology, Computer Technology
Debangshu Banerjee, Kintan Saha, Aditya Gopalan. Towards Reliable, Uncertainty-Aware Alignment [EB/OL]. (2025-07-21) [2025-08-10]. https://arxiv.org/abs/2507.15906.