国家预印本平台 (National Preprint Platform)

Faster Fixed-Point Methods for Multichain MDPs

Source: arXiv
Abstract

We study value-iteration (VI) algorithms for solving general (a.k.a. multichain) Markov decision processes (MDPs) under the average-reward criterion, a fundamental but theoretically challenging setting. Beyond the difficulties inherent to all average-reward problems, namely the lack of contractivity and the non-uniqueness of fixed points of the Bellman operator, in the multichain setting an optimal policy must additionally solve a navigation subproblem: steering towards the best connected component, on top of optimizing long-run performance within each component. We develop algorithms that better solve this navigation subproblem and thereby converge faster for multichain MDPs, obtaining improved convergence rates and sharper measures of complexity relative to prior work. Many key components of our results are of potential independent interest, including novel connections between average-reward and discounted problems, optimal fixed-point methods for discounted VI that extend to general Banach spaces, new sublinear convergence rates for the discounted value error, and refined suboptimality decompositions for multichain MDPs. Overall, our results yield faster convergence rates for discounted and average-reward problems and expand the theoretical foundations of VI approaches.
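
As background for the fixed-point methods discussed above, here is a minimal sketch (assuming Python with NumPy; the 2-state, 2-action MDP and all numbers are hypothetical, not taken from the paper) of classical discounted value iteration, i.e., repeated application of the Bellman operator T(V) = max_a [r_a + gamma * P_a V]. With discount factor gamma < 1, this operator is a gamma-contraction in the sup norm, which is precisely the property that fails in the average-reward (gamma -> 1) setting the paper studies.

    import numpy as np

    def value_iteration(P, r, gamma, tol=1e-8, max_iter=100_000):
        """Discounted value iteration via repeated application of the
        Bellman operator T(V) = max_a [ r_a + gamma * P_a V ].

        P: (A, S, S) array of per-action transition kernels.
        r: (A, S) array of per-action, per-state rewards.
        gamma: discount factor in (0, 1), the contraction modulus of T.
        """
        V = np.zeros(P.shape[1])
        for _ in range(max_iter):
            # Q[a, s] = r[a, s] + gamma * E[V(s') | s, a]
            Q = r + gamma * (P @ V)
            V_new = Q.max(axis=0)  # greedy maximization over actions
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new
        return V

    # Hypothetical 2-state, 2-action MDP for illustration only.
    P = np.array([[[0.9, 0.1],
                   [0.2, 0.8]],   # transitions under action 0
                  [[0.5, 0.5],
                   [0.0, 1.0]]])  # transitions under action 1
    r = np.array([[1.0, 0.0],     # rewards under action 0
                  [0.5, 2.0]])    # rewards under action 1

    print(value_iteration(P, r, gamma=0.9))

In the average-reward setting there is no such contraction: roughly speaking, iterates of the undiscounted Bellman operator grow linearly with the optimal gain rather than converging, and in multichain MDPs the gain itself can differ across connected components, which gives rise to the navigation subproblem the abstract highlights.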

Yudong Chen, Matthew Zurek

Subject areas: fundamental theory of automation; computing technology and computer technology

Yudong Chen, Matthew Zurek. Faster Fixed-Point Methods for Multichain MDPs [EB/OL]. (2025-06-26) [2025-07-21]. https://arxiv.org/abs/2506.20910.
