
Why is Your Language Model a Poor Implicit Reward Model?

Source: arXiv
Abstract

Reward models are key to language model post-training and inference pipelines. Conveniently, recent work showed that every language model defines an implicit reward model (IM-RM), without requiring any architectural changes. However, such IM-RMs tend to generalize worse, especially out-of-distribution, compared to explicit reward models (EX-RMs) that apply a dedicated linear head over the hidden representations of a language model. The existence of a generalization gap is puzzling, as EX-RMs and IM-RMs are nearly identical. They can be trained using the same data, loss function, and language model, and differ only in how the reward is computed. Towards a fundamental understanding of the implicit biases underlying different reward model types, we investigate the root cause of this gap. Our main finding, backed by theory and experiments, is that IM-RMs rely more heavily on superficial token-level cues. Consequently, they often generalize worse than EX-RMs under token-level distribution shifts, as well as in-distribution. Furthermore, we provide evidence against alternative hypotheses for the generalization gap. Most notably, we challenge the intuitive claim that IM-RMs struggle in tasks where generation is harder than verification because they can operate both as a verifier and a generator. Taken together, our results highlight that seemingly minor design choices can substantially impact the generalization behavior of reward models.
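The abstract notes that EX-RMs and IM-RMs can share the same data, loss, and language model, differing only in how the reward is computed. The snippet below is a minimal PyTorch sketch of that distinction, assuming the EX-RM scores a response with a scalar linear head over the model's final hidden state and the IM-RM reads the reward off the model's own token log-probabilities (a DPO-style implicit reward, here without a reference model). All names, shapes, and values are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not from the paper): contrasting how an explicit reward model
# (EX-RM) and an implicit reward model (IM-RM) score the same prompt-response pair.
# Shapes and random tensors below stand in for real language-model outputs.
import torch

hidden_dim, vocab_size, seq_len = 16, 100, 5
torch.manual_seed(0)

# Shared language-model outputs for a pair (x, y):
# the final hidden representation and per-token logits over the response tokens.
final_hidden = torch.randn(hidden_dim)              # h(x, y)
response_logits = torch.randn(seq_len, vocab_size)  # logits at each response position
response_tokens = torch.randint(vocab_size, (seq_len,))

# EX-RM: a dedicated linear head maps the hidden representation to a scalar reward.
reward_head = torch.nn.Linear(hidden_dim, 1)
ex_rm_reward = reward_head(final_hidden).squeeze()

# IM-RM: the reward is read off the model's own token probabilities,
# e.g. the summed log-likelihood of the response under the model.
log_probs = torch.log_softmax(response_logits, dim=-1)
im_rm_reward = log_probs[torch.arange(seq_len), response_tokens].sum()

print(f"EX-RM reward: {ex_rm_reward.item():.3f}")
print(f"IM-RM reward: {im_rm_reward.item():.3f}")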

Noam Razin, Yong Lin, Jiarui Yao, Sanjeev Arora

Computing Technology, Computer Technology

Noam Razin, Yong Lin, Jiarui Yao, Sanjeev Arora. Why is Your Language Model a Poor Implicit Reward Model? [EB/OL]. (2025-07-10) [2025-07-22]. https://arxiv.org/abs/2507.07981.
