
Permutation Equivariant Model-based Offline Reinforcement Learning for Auto-bidding

Source: arXiv

Abstract

Reinforcement learning (RL) for auto-bidding has shifted from using simplistic offline simulators (Simulation-based RL Bidding, SRLB) to offline RL on fixed real datasets (Offline RL Bidding, ORLB). However, ORLB policies are limited by the dataset's state space coverage, offering modest gains. While SRLB expands state coverage, its simulator-reality gap risks misleading policies. This paper introduces Model-based RL Bidding (MRLB), which learns an environment model from real data to bridge this gap. MRLB trains policies using both real and model-generated data, expanding state coverage beyond ORLB. To ensure model reliability, we propose: 1) A permutation equivariant model architecture for better generalization, and 2) A robust offline Q-learning method that pessimistically penalizes model errors. These form the Permutation Equivariant Model-based Offline RL (PE-MORL) algorithm. Real-world experiments show that PE-MORL outperforms state-of-the-art auto-bidding methods.
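
To make the two components named in the abstract concrete, below is a minimal, hypothetical sketch in Python/NumPy. The Deep-Sets-style shared-weights layer with mean pooling is one standard way to realize a "permutation equivariant model architecture," and the MOPO-style uncertainty-penalized Bellman target is one common way to "pessimistically penalize model errors"; the paper's actual PE-MORL designs may differ, and all names here (pe_layer, pessimistic_target, lam) are illustrative assumptions, not the authors' API.

```python
import numpy as np

def pe_layer(X, W_elem, W_pool, b):
    """Permutation equivariant layer over a set X of shape (n, d_in).

    Each element is transformed with shared weights plus a pooled summary
    of the whole set, so permuting the input rows permutes the output rows
    identically: f(PX) == P f(X).
    """
    pooled = X.mean(axis=0, keepdims=True)            # (1, d_in), permutation invariant
    return np.tanh(X @ W_elem + pooled @ W_pool + b)  # (n, d_out), equivariant

def pessimistic_target(r, u, q_next, lam=1.0, gamma=0.99):
    """MOPO-style pessimistic Bellman target (an assumed stand-in for the
    paper's penalty): subtract lam * u(s, a), an estimate of model error,
    from the reward so the policy distrusts model-generated transitions."""
    return (r - lam * u) + gamma * q_next

# Equivariance check on random data.
rng = np.random.default_rng(0)
n, d_in, d_out = 5, 4, 3
X = rng.normal(size=(n, d_in))
W_elem = rng.normal(size=(d_in, d_out))
W_pool = rng.normal(size=(d_in, d_out))
b = rng.normal(size=(d_out,))

perm = rng.permutation(n)
out = pe_layer(X, W_elem, W_pool, b)
out_perm = pe_layer(X[perm], W_elem, W_pool, b)
assert np.allclose(out[perm], out_perm)  # f(PX) == P f(X)
```

Weight sharing across set elements is what gives both the generalization benefit (the model cannot overfit to an arbitrary ordering of bids) and the equivariance guarantee verified by the assertion above.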

Zhiyu Mou, Miao Xu, Wei Chen, Rongquan Bai, Chuan Yu, Jian Xu

Subject: Computing Technology, Computer Technology

Zhiyu Mou, Miao Xu, Wei Chen, Rongquan Bai, Chuan Yu, Jian Xu. Permutation Equivariant Model-based Offline Reinforcement Learning for Auto-bidding [EB/OL]. (2025-06-22) [2025-07-09]. https://arxiv.org/abs/2506.17919.
