Convergence and Sample Complexity of First-Order Methods for Agnostic Reinforcement Learning
We study reinforcement learning (RL) in the agnostic policy learning setting, where the goal is to find a policy whose performance is competitive with the best policy in a given class of interest $\Pi$ -- crucially, without assuming that $\Pi$ contains the optimal policy. We propose a general policy learning framework that reduces this problem to first-order optimization in a non-Euclidean space, leading to new algorithms as well as shedding light on the convergence properties of existing ones. Specifically, under the assumption that $\Pi$ is convex and satisfies a variational gradient dominance (VGD) condition -- an assumption known to be strictly weaker than more standard completeness and coverability conditions -- we obtain sample complexity upper bounds for three policy learning algorithms: \emph{(i)} Steepest Descent Policy Optimization, derived from a constrained steepest descent method for non-convex optimization; \emph{(ii)} the classical Conservative Policy Iteration algorithm \citep{kakade2002approximately} reinterpreted through the lens of the Frank-Wolfe method, which leads to improved convergence results; and \emph{(iii)} an on-policy instantiation of the well-studied Policy Mirror Descent algorithm. Finally, we empirically evaluate the VGD condition across several standard environments, demonstrating the practical relevance of our key assumption.
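For readers unfamiliar with the two notions above, the following schematic formulas are included for illustration only; the notation ($V^{\pi}$ for the value of policy $\pi$, constants $C$, $\varepsilon$, and step size $\alpha_t$) is assumed here, and the paper's precise definitions and sample-based implementation may differ. A variational gradient dominance condition over a convex class $\Pi$ is commonly stated as
\[
\max_{\bar\pi \in \Pi} V^{\bar\pi} - V^{\pi} \;\le\; C \cdot \max_{\pi' \in \Pi} \big\langle \nabla V^{\pi},\, \pi' - \pi \big\rangle + \varepsilon
\qquad \text{for all } \pi \in \Pi,
\]
and the Frank-Wolfe reading of Conservative Policy Iteration corresponds to the schematic update
\[
\pi'_t \in \arg\max_{\pi' \in \Pi} \big\langle \nabla V^{\pi_t},\, \pi' - \pi_t \big\rangle,
\qquad
\pi_{t+1} = (1-\alpha_t)\,\pi_t + \alpha_t\,\pi'_t,
\]
in which the linear maximization over $\Pi$ plays the role of the Frank-Wolfe oracle and the convex mixture recovers CPI's conservative policy update.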
Uri Sherman, Tomer Koren, Yishay Mansour
Computing Technology, Computer Technology
Uri Sherman, Tomer Koren, Yishay Mansour. Convergence and Sample Complexity of First-Order Methods for Agnostic Reinforcement Learning [EB/OL]. (2025-07-06) [2025-07-21]. https://arxiv.org/abs/2507.04406.