
Governance Challenges in Reinforcement Learning from Human Feedback: Evaluator Rationality and Reinforcement Stability

Source: arXiv
Abstract

Reinforcement Learning from Human Feedback (RLHF) is central to aligning large language models (LLMs) with human values and expectations. However, the process remains susceptible to governance challenges, including evaluator bias, inconsistency, and the unreliability of feedback. This study examines how the cognitive capacity of evaluators, specifically their level of rationality, affects the stability of reinforcement signals. A controlled experiment comparing high-rationality and low-rationality participants reveals that evaluators with higher rationality scores produce significantly more consistent and expert-aligned feedback. In contrast, lower-rationality participants demonstrate considerable variability in their reinforcement decisions ($p < 0.01$). To address these challenges and improve RLHF governance, we recommend implementing evaluator pre-screening, systematic auditing of feedback consistency, and reliability-weighted reinforcement aggregation. These measures enhance the fairness, transparency, and robustness of AI alignment pipelines.

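The reliability-weighted reinforcement aggregation recommended in the abstract can be illustrated with a minimal sketch: each evaluator's feedback is weighted by a reliability score (for example, agreement with expert-labelled comparisons) before the reinforcement signal is aggregated. The function names, weighting rule, and numbers below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not from the paper): reliability-weighted aggregation
# of evaluator feedback. The consistency-based weighting rule and all scores
# are hypothetical choices for demonstration only.
from typing import Dict


def reliability_weights(consistency: Dict[str, float]) -> Dict[str, float]:
    """Turn per-evaluator consistency scores (0..1) into normalized weights."""
    total = sum(consistency.values())
    if total == 0:
        # Fall back to uniform weights if no evaluator shows any consistency.
        n = len(consistency)
        return {e: 1.0 / n for e in consistency}
    return {e: c / total for e, c in consistency.items()}


def aggregate_reward(ratings: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted average of evaluator ratings for a single model response."""
    return sum(weights[e] * r for e, r in ratings.items())


if __name__ == "__main__":
    # Hypothetical consistency scores, e.g. agreement with expert-aligned labels.
    consistency = {"eval_A": 0.9, "eval_B": 0.4, "eval_C": 0.7}
    weights = reliability_weights(consistency)

    # Ratings given by each evaluator to one candidate response (scale 0..1).
    ratings = {"eval_A": 0.8, "eval_B": 0.2, "eval_C": 0.7}
    print("weights:", weights)
    print("aggregated reward:", round(aggregate_reward(ratings, weights), 3))
```

Under this weighting, a low-consistency evaluator still contributes to the reward signal but has proportionally less influence, which is one way to operationalize the paper's call for reliability-aware aggregation.
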
Dana Alsagheer, Abdulrahman Kamal, Mohammad Kamal, Weidong Shi

Computing technology; computer technology

Dana Alsagheer, Abdulrahman Kamal, Mohammad Kamal, Weidong Shi. Governance Challenges in Reinforcement Learning from Human Feedback: Evaluator Rationality and Reinforcement Stability [EB/OL]. (2025-04-17) [2025-04-29]. https://arxiv.org/abs/2504.13972
