National Preprint Platform

CRED: Counterfactual Reasoning and Environment Design for Active Preference Learning

Source: arXiv

Abstract

For effective real-world deployment, robots should adapt to human preferences, such as balancing distance, time, and safety in delivery routing. Active preference learning (APL) learns human reward functions by presenting trajectories for ranking. However, existing methods often struggle to explore the full trajectory space and fail to identify informative queries, particularly in long-horizon tasks. We propose CRED, a trajectory generation method for APL that improves reward estimation by jointly optimizing environment design and trajectory selection. CRED "imagines" new scenarios through environment design and uses counterfactual reasoning -- by sampling rewards from its current belief and asking "What if this reward were the true preference?" -- to generate a diverse and informative set of trajectories for ranking. Experiments in GridWorld and real-world navigation using OpenStreetMap data show that CRED improves reward learning and generalizes effectively across different environments.
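The counterfactual step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the trajectory features, belief distribution, and all names are hypothetical stand-ins for a linear-reward APL setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate trajectories, each summarized by a feature vector
# [negative distance, negative time, safety]. Values are illustrative only.
trajectories = {
    "short_risky": np.array([-1.0, -1.0, 0.2]),
    "medium":      np.array([-1.5, -1.4, 0.6]),
    "long_safe":   np.array([-2.5, -2.2, 0.9]),
}

# Assumed current belief over linear reward weights: an isotropic Gaussian.
belief_mean = np.array([0.5, 0.3, 0.8])
belief_std = 0.5

def counterfactual_query_set(n_samples=20):
    """Sample rewards from the belief and ask: if this reward were the
    true preference, which trajectory would be optimal? The set of
    distinct optima forms a diverse candidate query for ranking."""
    query = set()
    for _ in range(n_samples):
        w = rng.normal(belief_mean, belief_std)  # counterfactual reward
        best = max(trajectories, key=lambda name: w @ trajectories[name])
        query.add(best)
    return sorted(query)

print(counterfactual_query_set())
```

Trajectories that are optimal under many plausible rewards but disagree with each other are exactly the ones whose ranking most reduces uncertainty about the human's true preference.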

Yi-Shiuan Tung, Bradley Hayes, Alessandro Roncone

Subject classification: Computing and Computer Technology; Automation Technology and Equipment; Fundamental Theory of Automation

Yi-Shiuan Tung, Bradley Hayes, Alessandro Roncone. CRED: Counterfactual Reasoning and Environment Design for Active Preference Learning [EB/OL]. (2025-07-07) [2025-07-18]. https://arxiv.org/abs/2507.05458.
