
Constrained Average-Reward Intermittently Observable MDPs

Source: arXiv

Abstract

In Markov Decision Processes (MDPs) with intermittent state information, decision-making becomes challenging due to periods of missing observations. Linear programming (LP) methods play a crucial role in solving MDPs, in particular those with constraints. However, the resulting belief MDPs lead to infinite-dimensional LPs, even when the original MDP has finite state and action spaces, so verifying strong duality becomes non-trivial. This paper investigates conditions for no duality gap in average-reward finite MDPs with intermittent state observations. We first establish that in such MDPs the belief MDP is unichain if the original Markov chain is recurrent. Under the same assumption, we further establish strong duality of the problem. Finally, we provide a wireless channel example in which the belief state depends on the last channel state received and the age of that channel state. Our numerical results indicate interesting properties of the solution.
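The wireless channel example can be illustrated with a minimal sketch of the belief-state structure the abstract describes: when observations are intermittent, the belief over channel states is determined by the last observed state and the age of that observation (the number of unobserved steps), i.e., the corresponding row of the transition matrix raised to the age. The two-state channel and its transition matrix below are assumed for illustration only and are not taken from the paper.

```python
# Hypothetical two-state channel: state 0 = bad, state 1 = good.
# The transition matrix P is an assumed example, not from the paper.
P = [[0.8, 0.2],
     [0.3, 0.7]]

def belief_after_age(last_state, age):
    """Belief over channel states given the last observed state and the
    age of that observation: the last_state row of P^age, computed by
    repeatedly propagating the belief vector through P."""
    # Start from a point-mass belief on the last observed state.
    b = [1.0 if s == last_state else 0.0 for s in range(2)]
    for _ in range(age):
        # One step of the Markov chain: b <- b P.
        b = [sum(b[i] * P[i][j] for i in range(2)) for j in range(2)]
    return b

# One step after observing the good state, the belief is the good-state
# row of P; as the age grows, the belief approaches the stationary
# distribution regardless of which state was last observed.
print(belief_after_age(1, 1))
print(belief_after_age(1, 50))
```

This makes the abstract's point concrete: although the underlying chain is finite, the set of reachable beliefs is indexed by (last observed state, age), so the belief MDP has countably many states and the associated LP becomes infinite-dimensional.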

Konstantin Avrachenkov, Madhu Dhiman, Veeraruna Kavitha

Subject: Foundations of automation theory

Konstantin Avrachenkov, Madhu Dhiman, Veeraruna Kavitha. Constrained Average-Reward Intermittently Observable MDPs [EB/OL]. (2025-04-18) [2025-05-08]. https://arxiv.org/abs/2504.13823.