Optimal Single-Policy Sample Complexity and Transient Coverage for Average-Reward Offline RL
We study offline reinforcement learning in average-reward MDPs, a setting that poses heightened challenges of distribution shift and non-uniform coverage and remains relatively underexamined theoretically. While previous work obtains performance guarantees under single-policy data coverage assumptions, such guarantees rely on additional complexity measures that are uniform over all policies, such as the uniform mixing time. We develop sharp guarantees depending only on the target policy, specifically the bias span and a novel policy hitting radius, yielding the first fully single-policy sample complexity bound for average-reward offline RL. We are also the first to handle general weakly communicating MDPs, in contrast to the restrictive structural assumptions made in prior work. To achieve this, we introduce an algorithm based on pessimistic discounted value iteration enhanced by a novel quantile clipping technique, which enables the use of a sharper empirical-span-based penalty function. Our algorithm also requires no prior parameter knowledge for its implementation. Remarkably, we show via hard examples that learning under our conditions requires coverage assumptions beyond the stationary distribution of the target policy, distinguishing single-policy complexity measures from previously examined settings. We also develop lower bounds that nearly match our main result.
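To make the algorithmic idea concrete, the following is a minimal Python sketch of pessimistic discounted value iteration on an empirical model built from offline data, with a quantile-based clipping step and a span-dependent penalty. The penalty form, the clipping rule, and all constants (c, clip_q, span_cap) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def pessimistic_vi_sketch(P_hat, r_hat, counts, gamma=0.99, c=1.0,
                          clip_q=0.1, span_cap=10.0, iters=2000, tol=1e-8):
    """A minimal sketch of pessimistic discounted value iteration with a
    quantile-based clipping step. The penalty shape, clipping rule, and
    constants here are illustrative assumptions, not the paper's algorithm.

    P_hat  : (S, A, S) empirical transition kernel
    r_hat  : (S, A)    empirical mean rewards in [0, 1]
    counts : (S, A)    dataset visit counts per state-action pair
    """
    S, A = r_hat.shape
    V = np.zeros(S)
    for _ in range(iters):
        # Empirical-span-based pessimism penalty (assumed form): larger when
        # the current value iterate has large span or the pair is rarely seen.
        span = V.max() - V.min()
        penalty = c * (span + 1.0) / np.sqrt(np.maximum(counts, 1))
        # Pessimistic Bellman backup on the empirical model.
        Q = r_hat - penalty + gamma * (P_hat @ V)   # shape (S, A)
        V_new = Q.max(axis=1)
        # Quantile clipping (illustrative): truncate values exceeding a low
        # quantile by more than span_cap, keeping the iterate's span bounded.
        lo = np.quantile(V_new, clip_q)
        V_new = np.minimum(V_new, lo + span_cap)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    # Return the greedy policy under the final pessimistic Q-values.
    span = V.max() - V.min()
    penalty = c * (span + 1.0) / np.sqrt(np.maximum(counts, 1))
    Q = r_hat - penalty + gamma * (P_hat @ V)
    return Q.argmax(axis=1), V
```

As a design note, subtracting the penalty from the empirical rewards implements pessimism against model estimation error, while the clipping step limits how large the value iterates' span can grow during the recursion; both ingredients are shown only in a generic form here.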
Matthew Zurek, Guy Zamir, Yudong Chen
Subject classification: Computing Technology; Computer Technology
Matthew Zurek, Guy Zamir, Yudong Chen. Optimal Single-Policy Sample Complexity and Transient Coverage for Average-Reward Offline RL [EB/OL]. (2025-06-26) [2025-07-09]. https://arxiv.org/abs/2506.20904.