
Preference-based Multi-Objective Reinforcement Learning

Source: arXiv
Abstract

Multi-objective reinforcement learning (MORL) is a structured approach for optimizing tasks with multiple objectives. However, it often relies on pre-defined reward functions, which can be hard to design for balancing conflicting goals and may lead to oversimplification. Preferences can serve as more flexible and intuitive decision-making guidance, eliminating the need for complicated reward design. This paper introduces preference-based MORL (Pb-MORL), which formalizes the integration of preferences into the MORL framework. We theoretically prove that preferences can derive policies across the entire Pareto frontier. To guide policy optimization using preferences, our method constructs a multi-objective reward model that aligns with the given preferences. We further provide theoretical proof to show that optimizing this reward model is equivalent to training the Pareto optimal policy. Extensive experiments in benchmark multi-objective tasks, a multi-energy management task, and an autonomous driving task on a multi-lane highway show that our method performs competitively, even surpassing the oracle method, which uses the ground-truth reward function. This highlights its potential for practical applications in complex real-world systems.
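The abstract does not give implementation details, but the reward-learning step it describes (fitting a multi-objective reward model to given preferences) is commonly realized with a Bradley-Terry-style model over pairwise trajectory comparisons. The Python sketch below is an illustrative assumption, not the authors' code: the class RewardModel, the function preference_loss, and the scalarization by a per-query preference weight vector are hypothetical names chosen here to show the general idea.

import torch
import torch.nn as nn

class RewardModel(nn.Module):
    # Hypothetical network: maps a state-action pair to one reward per objective.
    def __init__(self, obs_dim, act_dim, n_objectives, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_objectives),
        )

    def forward(self, obs, act):
        # Returns a reward vector with one entry per objective.
        return self.net(torch.cat([obs, act], dim=-1))

def preference_loss(model, seg_a, seg_b, pref_weights, labels):
    # Bradley-Terry-style loss: labels[i] = 1.0 if segment A is preferred
    # to segment B under the preference vector pref_weights[i].
    (obs_a, act_a), (obs_b, act_b) = seg_a, seg_b
    # Sum predicted per-objective rewards over time, then scalarize with the
    # preference weights to get one scalar return per segment.
    ret_a = (model(obs_a, act_a).sum(dim=1) * pref_weights).sum(dim=-1)
    ret_b = (model(obs_b, act_b).sum(dim=1) * pref_weights).sum(dim=-1)
    return nn.functional.binary_cross_entropy_with_logits(ret_a - ret_b, labels)

Under this kind of setup, the learned reward model would be used as a scalarized reward signal for a standard RL algorithm; how the paper actually trains the Pareto-optimal policy is specified in the full text, not in this abstract.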

Ni Mu, Yao Luan, Qing-Shan Jia

DOI: 10.1109/TASE.2025.3589271

Subjects: Fundamental Theory of Automation; Computing Technology, Computer Technology

Ni Mu, Yao Luan, Qing-Shan Jia. Preference-based Multi-Objective Reinforcement Learning [EB/OL]. (2025-07-18) [2025-08-18]. https://arxiv.org/abs/2507.14066.
