
Similarity as Reward Alignment: Robust and Versatile Preference-based Reinforcement Learning

Source: arXiv
Abstract

Preference-based Reinforcement Learning (PbRL) entails a variety of approaches for aligning models with human intent to alleviate the burden of reward engineering. However, most previous PbRL work has not investigated robustness to labeler errors, which are inevitable when labelers are non-experts or operate under time constraints. Additionally, PbRL algorithms often target very specific settings (e.g., pairwise ranked preferences or purely offline learning). We introduce Similarity as Reward Alignment (SARA), a simple contrastive framework that is both resilient to noisy labels and adaptable to diverse feedback formats and training paradigms. SARA learns a latent representation of preferred samples and computes rewards as similarities to the learned latent. We demonstrate strong performance compared to baselines on continuous control offline RL benchmarks. We further demonstrate SARA's versatility in applications such as trajectory filtering for downstream tasks, cross-task preference transfer, and reward shaping in online learning.
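The core mechanism described in the abstract (learn a latent representation of preferred samples, then score new samples by their similarity to that latent) can be sketched roughly as follows. This is a minimal illustration under assumed names and architecture (SaraRewardSketch, segment_dim, the MLP encoder, cosine similarity); it is not the paper's implementation and it omits SARA's contrastive training objective.

```python
# Illustrative sketch only: encode preferred trajectory segments into a latent
# space, summarize them as a single "preferred" latent, and score new segments
# by cosine similarity to that latent. All names and the encoder architecture
# are assumptions for illustration; SARA's contrastive training is not shown.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SaraRewardSketch(nn.Module):
    def __init__(self, segment_dim: int, latent_dim: int = 64):
        super().__init__()
        # Simple MLP encoder mapping a flattened trajectory segment to a latent.
        self.encoder = nn.Sequential(
            nn.Linear(segment_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Latent summary of the preferred samples, filled in by fit_preferred().
        self.register_buffer("preferred_latent", torch.zeros(latent_dim))

    @torch.no_grad()
    def fit_preferred(self, preferred_segments: torch.Tensor) -> None:
        # Encode the labeled-preferred segments and average into one prototype.
        z = F.normalize(self.encoder(preferred_segments), dim=-1)
        self.preferred_latent = F.normalize(z.mean(dim=0), dim=-1)

    @torch.no_grad()
    def reward(self, segments: torch.Tensor) -> torch.Tensor:
        # Reward = cosine similarity between each segment's latent and the
        # preferred prototype (higher means closer to preferred behavior).
        z = F.normalize(self.encoder(segments), dim=-1)
        return z @ self.preferred_latent


# Usage with random placeholder data (batches of flattened segments).
model = SaraRewardSketch(segment_dim=32)
model.fit_preferred(torch.randn(16, 32))   # segments labeled as preferred
print(model.reward(torch.randn(4, 32)))    # similarity-based rewards
```

In this sketch the similarity score can be plugged in wherever a reward signal is needed, which is consistent with the versatility the abstract claims (offline reward labeling, trajectory filtering, or reward shaping in online learning).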

Sara Rajaram, R. James Cotton, Fabian H. Sinz

Subject: Computing Technology, Computer Technology

Sara Rajaram, R. James Cotton, Fabian H. Sinz. Similarity as Reward Alignment: Robust and Versatile Preference-based Reinforcement Learning [EB/OL]. (2025-06-14) [2025-06-30]. https://arxiv.org/abs/2506.12529.
