
Deep neural networks can provably solve Bellman equations for Markov decision processes without the curse of dimensionality


Source: arXiv
Abstract

Discrete-time stochastic optimal control problems and Markov decision processes (MDPs) are fundamental models for sequential decision-making under uncertainty and as such provide the mathematical framework underlying reinforcement learning theory. A central tool for solving MDPs is the Bellman equation and its solution, the so-called $Q$-function. In this article, we construct deep neural network (DNN) approximations for $Q$-functions associated with MDPs with infinite time horizon and finite control set $A$. More specifically, we show that if the payoff function and the random transition dynamics of the MDP can be suitably approximated by DNNs with leaky rectified linear unit (ReLU) activation, then the solutions $Q_d\colon \mathbb R^d\to \mathbb R^{|A|}$, $d\in \mathbb{N}$, of the associated Bellman equations can also be approximated in the $L^2$-sense by DNNs with leaky ReLU activation whose numbers of parameters grow at most polynomially in both the dimension $d\in \mathbb{N}$ of the state space and the reciprocal $1/\varepsilon$ of the prescribed error $\varepsilon\in (0,1)$. Our proof relies on the recently introduced full-history recursive multilevel fixed-point (MLFP) approximation scheme.
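For orientation, the Bellman equation referenced in the abstract can be sketched in standard discounted-MDP notation; the symbols $g$, $f$, $Z$, and $\gamma$ below are illustrative placeholders and need not match the paper's precise setup:

```latex
% Bellman fixed-point equation for the Q-function of a discounted MDP
% (illustrative notation, not necessarily the paper's exact formulation):
% state x in R^d, action a in the finite control set A, discount factor
% gamma in (0,1), payoff g, and random transition dynamics x' = f(x,a,Z)
% driven by a noise variable Z.
\[
  Q(x,a)
  \;=\;
  \mathbb{E}\!\left[\,
    g(x,a,Z)
    \;+\;
    \gamma \max_{a' \in A} Q\bigl(f(x,a,Z),\, a'\bigr)
  \right],
  \qquad (x,a) \in \mathbb{R}^d \times A .
\]
```

The $Q$-function is thus the fixed point of an expectation-plus-maximum operator, which is what fixed-point schemes such as the MLFP approximation mentioned above target.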

Arnulf Jentzen, Konrad Kleinberg, Thomas Kruse

Subject: computing technology, computer technology

Arnulf Jentzen, Konrad Kleinberg, Thomas Kruse. Deep neural networks can provably solve Bellman equations for Markov decision processes without the curse of dimensionality [EB/OL]. (2025-06-28) [2025-07-16]. https://arxiv.org/abs/2506.22851.
