Multiplicative Rewards in Markovian Models
This paper studies the expected value of multiplicative rewards, where the rewards obtained in each step are multiplied (instead of the usual addition), in Markov chains (MCs) and Markov decision processes (MDPs). One of the key differences from additive rewards is that the expected value may diverge to infinity not only due to recurrent states, but also due to transient states. For MCs, computing the value is shown to be possible in polynomial time given an oracle for the comparison of succinctly represented integers (CSRI), which is only known to be solvable in polynomial time subject to number-theoretic conjectures. Interestingly, distinguishing whether the value is infinite or 0 is at least as hard as CSRI, while determining whether it is one of these two can be done in polynomial time. In MDPs, the optimal value can be computed in polynomial space. Further refined complexity results and results on the complexity of optimal schedulers are presented. The techniques developed for MDPs additionally allow solving the multiplicative variant of the stochastic shortest path problem. Finally, for MCs and MDPs where an absorbing state is reached almost surely, all considered problems are solvable in polynomial time.
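To make the quantity concrete: in the absorbing case, the expected multiplicative reward of a state satisfies a simple fixed-point equation, v(s) = r(s) · Σ P(s,s') v(s'), with value 1 at the absorbing state. The following is a minimal illustrative sketch (the chain, rewards, and function names are invented for this example, not taken from the paper):

```python
# Illustrative sketch: expected multiplicative reward in an absorbing
# Markov chain, computed by fixed-point iteration. The chain below is
# a made-up example, not from the paper.
P = {
    0: [(1, 0.5), (2, 0.5)],  # from state 0: go to 1 or 2 w.p. 0.5 each
    1: [(2, 1.0)],            # from state 1: go to the absorbing state 2
    2: [],                    # state 2 is absorbing
}
r = {0: 2.0, 1: 3.0, 2: 1.0}  # per-step rewards (1.0 at absorption)

def mult_value(P, r, absorbing, iters=100):
    """Iterate v(s) = r(s) * sum_{s'} P(s,s') * v(s'), v = 1 at absorption."""
    v = {s: (1.0 if s in absorbing else 0.0) for s in P}
    for _ in range(iters):
        v = {s: v[s] if s in absorbing
                else r[s] * sum(p * v[t] for t, p in P[s])
             for s in P}
    return v

v = mult_value(P, r, absorbing={2})
print(v[0])  # 2 * (0.5 * 3 + 0.5 * 1) = 4.0
```

Since an absorbing state is reached almost surely here, the iteration converges; this corresponds to the polynomial-time-solvable case mentioned in the abstract (there one would solve the linear system directly rather than iterate).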
Tobias Meggendorfer, Krishnendu Chatterjee, Jakob Piribauer, Christel Baier
Mathematical computing techniques; computer technology
Tobias Meggendorfer, Krishnendu Chatterjee, Jakob Piribauer, Christel Baier. Multiplicative Rewards in Markovian Models [EB/OL]. (2025-06-23) [2025-06-27]. https://arxiv.org/abs/2504.18277.