Adaptive Sampling for Best Policy Identification in Markov Decision Processes

Source: arXiv

Abstract

We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model. The objective is to devise a learning algorithm that returns the best policy as early as possible. We first derive a problem-specific lower bound on the sample complexity of any learning algorithm. This lower bound corresponds to an optimal sample allocation that solves a non-convex program, and hence is hard to exploit in the design of efficient algorithms. We then provide a simple and tight upper bound on the sample complexity lower bound, whose corresponding nearly-optimal sample allocation is explicit. The upper bound depends on specific functionals of the MDP, such as the sub-optimality gaps and the variance of the next-state value function, and thus truly captures the hardness of the MDP. Finally, we devise KLB-TS (KL Ball Track-and-Stop), an algorithm that tracks this nearly-optimal allocation, and provide asymptotic guarantees for its sample complexity (both almost surely and in expectation). The advantages of KLB-TS over state-of-the-art algorithms are discussed and illustrated numerically.
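To fix ideas, instance-specific lower bounds of this kind are typically obtained via a change-of-measure argument; the sketch below shows the generic form such a bound takes, not the paper's exact statement, and the notation (the allocation simplex $\Delta_{S\times A}$, the alternative set $\mathrm{Alt}(M)$, and transition-only KL terms) is an assumption made for illustration.

\[
  \mathbb{E}[\tau_\delta] \;\ge\; T^*(M)\,\mathrm{kl}(\delta, 1-\delta),
  \qquad
  \frac{1}{T^*(M)} \;=\;
  \sup_{\omega \in \Delta_{S \times A}}
  \inf_{M' \in \mathrm{Alt}(M)}
  \sum_{s,a} \omega_{s,a}\,
  \mathrm{KL}\!\big(p_M(\cdot \mid s,a),\, p_{M'}(\cdot \mid s,a)\big),
\]

where $\Delta_{S \times A}$ is the simplex of sampling allocations over state-action pairs, $\mathrm{Alt}(M)$ is the set of MDPs whose optimal policy differs from that of $M$, and $\mathrm{kl}$ is the binary relative entropy. In this reading, the maximizing allocation $\omega^*$ plays the role of the optimal sample allocation mentioned in the abstract, the inner infimum over alternatives is what makes the program non-convex, and a Track-and-Stop style algorithm samples state-action pairs so that the empirical sampling proportions track a (nearly) optimal $\omega$.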

Aymen Al Marjani, Alexandre Proutiere

Subject: computing technology, computer technology

Aymen Al Marjani, Alexandre Proutiere. Adaptive Sampling for Best Policy Identification in Markov Decision Processes [EB/OL]. (2020-09-28) [2025-08-04]. https://arxiv.org/abs/2009.13405.
