国家预印本平台 (National Preprint Platform)

Reward Models in Deep Reinforcement Learning: A Survey


Source: arXiv
English Abstract

In reinforcement learning (RL), agents continually interact with the environment and use the feedback to refine their behavior. To guide policy optimization, reward models are introduced as proxies of the desired objectives, such that when the agent maximizes the accumulated reward, it also fulfills the task designer's intentions. Recently, significant attention from both academic and industrial researchers has focused on developing reward models that not only align closely with the true objectives but also facilitate policy optimization. In this survey, we provide a comprehensive review of reward modeling techniques within the deep RL literature. We begin by outlining the background and preliminaries in reward modeling. Next, we present an overview of recent reward modeling approaches, categorizing them based on the source, the mechanism, and the learning paradigm. Building on this understanding, we discuss various applications of these reward modeling techniques and review methods for evaluating reward models. Finally, we conclude by highlighting promising research directions in reward modeling. Altogether, this survey includes both established and emerging methods, filling the vacancy of a systematic review of reward models in current literature.
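One common reward-modeling paradigm covered in such surveys is learning a reward function from pairwise preferences via the Bradley-Terry model. The sketch below is purely illustrative, not taken from the paper: the toy data, the linear reward parameterization, and all names are assumptions, meant only to show the mechanism of fitting a reward model so that preferred behaviors receive higher predicted reward.

```python
import numpy as np

# Illustrative sketch (not from the survey): preference-based reward learning.
# A hidden "true" reward generates pairwise preferences; we fit a linear
# reward model r_theta(s) = theta * s by maximizing the Bradley-Terry
# log-likelihood P(a preferred over b) = sigmoid(r(a) - r(b)).

rng = np.random.default_rng(0)

def true_reward(s):
    # Hidden objective the task designer actually cares about (toy example).
    return 2.0 * s

# Generate preference pairs: in each pair, the first state is preferred.
pairs = []
for _ in range(200):
    a, b = rng.normal(size=2)
    pairs.append((a, b) if true_reward(a) > true_reward(b) else (b, a))

theta = 0.0  # reward-model parameter
lr = 0.1
for _ in range(100):
    grad = 0.0
    for a, b in pairs:
        # P(a preferred) under the current model.
        p = 1.0 / (1.0 + np.exp(-(theta * a - theta * b)))
        # Gradient of log P(a preferred) w.r.t. theta.
        grad += (1.0 - p) * (a - b)
    theta += lr * grad / len(pairs)

# The learned model should rank states the same way the true reward does.
print(theta > 0)
```

A policy could then be optimized against the learned `r_theta` instead of the inaccessible true objective; the survey's taxonomy (source, mechanism, learning paradigm) classifies many variants of this basic recipe.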

Rui Yu, Shenghua Wan, Yucen Wang, Chen-Xiao Gao, Le Gan, Zongzhang Zhang, De-Chuan Zhan

Subject: Computing Technology, Computer Technology

Rui Yu, Shenghua Wan, Yucen Wang, Chen-Xiao Gao, Le Gan, Zongzhang Zhang, De-Chuan Zhan. Reward Models in Deep Reinforcement Learning: A Survey [EB/OL]. (2025-06-18) [2025-07-02]. https://arxiv.org/abs/2506.15421.
