Reconciling Discrete-Time Mixed Policies and Continuous-Time Relaxed Controls in Reinforcement Learning and Stochastic Control
Source: arXiv
Abstract

Reinforcement learning (RL) is currently one of the most popular methods, with breakthrough results in a variety of fields. The framework relies on the concept of a Markov decision process (MDP), which corresponds to a discrete-time optimal control problem. In the RL literature, such problems are usually formulated with mixed policies, from which a random action is sampled at each time step. Recently, the optimal control community has studied continuous-time versions of RL algorithms, replacing MDPs with mixed policies by continuous-time stochastic processes with relaxed controls. In this work, we rigorously connect the two problems: we prove the strong convergence of the former towards the latter as the time discretization goes to $0$.
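The following is a minimal sketch, not the paper's construction, of the two objects the abstract relates: a discrete-time scheme in which a fresh action is sampled from a mixed (stochastic) policy at every step, and the relaxed-control dynamics whose drift is the policy average of the controlled drift. The drift b, the Gaussian policy, and all parameters are illustrative assumptions. Because the per-step action noise enters the drift at order dt, it averages out as the step size shrinks, so the mixed-policy paths approach the relaxed-control ones.

```python
import numpy as np

# A minimal sketch, NOT the paper's construction: the drift b, the Gaussian
# policy, sigma, and T below are all illustrative assumptions.  We simulate
# a 1-D controlled SDE
#     dX_t = b(X_t, a_t) dt + sigma dW_t
# with Euler-Maruyama, sampling a fresh action from a mixed (stochastic)
# policy pi(. | x) at every step, and compare it with the relaxed-control
# dynamics whose drift is the policy average of b.

rng = np.random.default_rng(0)
sigma, T = 0.5, 1.0

def b(x, a):
    # Hypothetical drift, linear in state and action.
    return -x + a

def sample_policy(x):
    # Hypothetical mixed policy: a ~ N(-0.5 x, 1), so E[b(x, a)] = -1.5 x.
    return rng.normal(-0.5 * x, 1.0)

def simulate_mixed(n_steps, n_paths=20_000):
    # Discrete-time MDP view: resample the action at each time step.
    dt = T / n_steps
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        a = sample_policy(x)
        x = x + b(x, a) * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)
    return x

def simulate_relaxed(n_steps, n_paths=20_000):
    # Continuous-time relaxed-control view: drift averaged over the policy.
    dt = T / n_steps
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        x = x + (-1.5 * x) * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)
    return x

for n_steps in (5, 20, 80):
    dt = T / n_steps
    print(f"dt={dt:<8.4f} std(X_T): mixed={simulate_mixed(n_steps).std():.3f} "
          f"relaxed={simulate_relaxed(n_steps).std():.3f}")
```

As dt shrinks, the extra variance injected by resampling the action (of order dt per unit time) vanishes and the two standard deviations of X_T agree, a crude Monte Carlo analogue of the strong-convergence statement, which the paper establishes rigorously.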

Rene Carmona, Mathieu Lauriere

Subjects: Fundamental Theory of Automation; Computing Technology, Computer Technology

Rene Carmona, Mathieu Lauriere. Reconciling Discrete-Time Mixed Policies and Continuous-Time Relaxed Controls in Reinforcement Learning and Stochastic Control [EB/OL]. (2025-04-30) [2025-05-22]. https://arxiv.org/abs/2504.21793.