FairDICE: Fairness-Driven Offline Multi-Objective Reinforcement Learning
Multi-objective reinforcement learning (MORL) aims to optimize policies in the presence of conflicting objectives, where linear scalarization is commonly used to reduce vector-valued returns into scalar signals. While effective for certain preferences, this approach cannot capture fairness-oriented goals such as Nash social welfare or max-min fairness, which require nonlinear and non-additive trade-offs. Although several online algorithms have been proposed for specific fairness objectives, a unified approach for optimizing nonlinear welfare criteria in the offline setting, where learning must proceed from a fixed dataset, remains unexplored. In this work, we present FairDICE, the first offline MORL framework that directly optimizes nonlinear welfare objectives. FairDICE leverages distribution correction estimation to jointly account for welfare maximization and distributional regularization, enabling stable and sample-efficient learning without requiring explicit preference weights or an exhaustive weight search. Across multiple offline benchmarks, FairDICE demonstrates strong fairness-aware performance compared to existing baselines.
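The gap between linear scalarization and the fairness criteria named above can be illustrated with a toy example (the return vectors and weights here are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical vector-valued returns for two policies over two objectives.
returns_a = np.array([10.0, 0.1])   # imbalanced policy
returns_b = np.array([4.0, 4.0])    # balanced policy

weights = np.array([0.5, 0.5])      # assumed uniform preference weights

# Linear scalarization: weighted sum of per-objective returns.
lin_a = weights @ returns_a   # 5.05
lin_b = weights @ returns_b   # 4.0

# Nash social welfare: product of per-objective returns
# (equivalently, the sum of their logs).
nsw_a = np.prod(returns_a)    # 1.0
nsw_b = np.prod(returns_b)    # 16.0

# Max-min fairness: the worst-case objective.
mm_a = returns_a.min()        # 0.1
mm_b = returns_b.min()        # 4.0

# Linear scalarization prefers the imbalanced policy, while both
# fairness criteria prefer the balanced one: no fixed weight vector
# reproduces these nonlinear, non-additive trade-offs in general.
assert lin_a > lin_b and nsw_b > nsw_a and mm_b > mm_a
```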
Woosung Kim, Jinho Lee, Jongmin Lee, Byung-Jun Lee
Computing technology, computer technology
Woosung Kim, Jinho Lee, Jongmin Lee, Byung-Jun Lee. FairDICE: Fairness-Driven Offline Multi-Objective Reinforcement Learning [EB/OL]. (2025-06-09) [2025-06-21]. https://arxiv.org/abs/2506.08062.