Online Markov decision processes with policy iteration
The online Markov decision process (MDP) is a generalization of the classical Markov decision process that incorporates changing reward functions. In this paper, we propose practical online MDP algorithms with policy iteration and theoretically establish a sublinear regret bound. A notable advantage of the proposed algorithm is that it can be easily combined with function approximation, and thus large and possibly continuous state spaces can be efficiently handled. Through experiments, we demonstrate the usefulness of the proposed algorithm.
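To make the setting concrete, here is a minimal sketch of an online MDP loop in which the reward function changes every round and the learner re-plans with standard policy iteration on the running average of the rewards observed so far. This follow-the-leader-style heuristic and all names in it are illustrative assumptions, not the paper's exact algorithm; the transition kernel is assumed known and fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 3, 2, 0.9
# Known, fixed transition kernel: P[a, s, :] is a distribution over next states.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))

def policy_iteration(R, P, gamma, iters=50):
    """Standard policy iteration for a reward table R[s, a]."""
    pi = np.zeros(n_states, dtype=int)
    for _ in range(iters):
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = P[pi, np.arange(n_states)]   # (S, S): row s is P[pi[s], s, :]
        R_pi = R[np.arange(n_states), pi]   # (S,)
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: greedy one-step lookahead.
        Q = R + gamma * np.einsum("asx,x->sa", P, V)
        new_pi = Q.argmax(axis=1)
        if np.array_equal(new_pi, pi):
            break
        pi = new_pi
    return pi

reward_sum = np.zeros((n_states, n_actions))
for t in range(1, 21):
    # Adversary reveals a new reward function each round.
    R_t = rng.uniform(size=(n_states, n_actions))
    reward_sum += R_t
    # Re-plan against the empirical mean of all rewards seen so far.
    pi_t = policy_iteration(reward_sum / t, P, gamma)
print("final policy:", pi_t)
```

In this tabular sketch the per-round planning is exact; the abstract's point is that the proposed policy-iteration-based method also admits function approximation, so large or continuous state spaces can be handled where such exact solves are infeasible.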
Masashi Sugiyama, Hao Zhang, Yao Ma
Computing technology, computer technology
Masashi Sugiyama, Hao Zhang, Yao Ma. Online Markov decision processes with policy iteration [EB/OL]. (2015-10-15) [2025-08-02]. https://arxiv.org/abs/1510.04454.