
Agnostic Reinforcement Learning: Foundations and Algorithms

Source: arXiv
English Abstract

Reinforcement Learning (RL) has demonstrated tremendous empirical success across numerous challenging domains. However, we lack a strong theoretical understanding of the statistical complexity of RL in environments with large state spaces, where function approximation is required for sample-efficient learning. This thesis addresses this gap by rigorously examining the statistical complexity of RL with function approximation from a learning-theoretic perspective. Departing from a long history of prior work, we consider the weakest form of function approximation, called agnostic policy learning, in which the learner seeks to find the best policy in a given class $\Pi$, with no guarantee that $\Pi$ contains an optimal policy for the underlying task. We systematically explore agnostic policy learning along three key axes: environment access -- how a learner collects data from the environment; coverage conditions -- intrinsic properties of the underlying MDP measuring the expansiveness of state-occupancy measures for policies in the class $\Pi$; and representational conditions -- structural assumptions on the class $\Pi$ itself. Within this comprehensive framework, we (1) design new learning algorithms with theoretical guarantees and (2) characterize fundamental performance bounds of any algorithm. Our results reveal significant statistical separations that highlight the power and limitations of agnostic policy learning.
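For reference, the agnostic policy learning objective described in the abstract is commonly stated as a PAC-style guarantee; the following formalization is a standard gloss rather than a quotation from the thesis. The learner must output a policy $\hat{\pi}$ satisfying
\[
V^{\hat{\pi}} \;\ge\; \max_{\pi \in \Pi} V^{\pi} \;-\; \varepsilon \quad \text{with probability at least } 1-\delta,
\]
where $V^{\pi}$ denotes the expected return of policy $\pi$ in the underlying MDP. Crucially, the benchmark is the best policy in the class $\Pi$, not the MDP's optimal policy, so no realizability assumption on $\Pi$ is required; sample complexity is then measured by the number of episodes of environment interaction needed to reach accuracy $\varepsilon$ with confidence $1-\delta$.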

Gene Li

Subject: Fundamental Theory of Automation

Gene Li. Agnostic Reinforcement Learning: Foundations and Algorithms [EB/OL]. (2025-06-02) [2025-06-22]. https://arxiv.org/abs/2506.01884.
