
Data-Driven Exploration for a Class of Continuous-Time Linear--Quadratic Reinforcement Learning Problems

Source: arXiv
Abstract

We study reinforcement learning (RL) for the same class of continuous-time stochastic linear--quadratic (LQ) control problems as in \cite{huang2024sublinear}, where volatilities depend on both states and controls, states are scalar-valued, and running control rewards are absent. We propose a model-free, data-driven exploration mechanism in which the critic adaptively adjusts the entropy regularization and the actor adaptively adjusts the policy variance. Unlike the constant or deterministic exploration schedules employed in \cite{huang2024sublinear}, which require extensive tuning to implement and ignore learning progress across iterations, our adaptive exploratory approach boosts learning efficiency with minimal tuning. Despite its flexibility, our method achieves a sublinear regret bound matching the best-known model-free results for this class of LQ problems, which were previously derived only under fixed exploration schedules. Numerical experiments demonstrate that adaptive exploration accelerates convergence and improves regret performance compared with non-adaptive model-free and model-based counterparts.
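To make the setting concrete, below is a minimal sketch of the kind of problem and adaptive-exploration loop the abstract describes. It is not the authors' algorithm: the dynamics coefficients (A, B, C, D), the cost weight Q, the crude score-function gradient, and the stabilization-based temperature decay are all illustrative assumptions introduced here for exposition only.

```python
# Hypothetical sketch, NOT the paper's algorithm: a scalar entropy-regularized
# LQ problem where a Gaussian policy's variance is tied to an entropy
# temperature that is adapted from observed learning progress.
import numpy as np

rng = np.random.default_rng(0)

# Unknown-to-the-learner scalar dynamics: dX = (A X + B u) dt + (C X + D u) dW,
# with volatility depending on both state and control, as in the problem class.
A, B, C, D = -1.0, 1.0, 0.2, 0.3
Q = 1.0                      # state cost weight (no running control reward)
T, dt = 1.0, 0.01
n_steps = int(T / dt)

theta = 0.0                  # actor parameter: policy mean u = theta * x
lam = 1.0                    # entropy temperature; drives exploration
lr = 0.05
prev_cost = None

for episode in range(200):
    x, cost, grad = 1.0, 0.0, 0.0
    sigma2 = lam             # Gaussian policy variance tied to the temperature
    for _ in range(n_steps):
        u = theta * x + np.sqrt(sigma2) * rng.standard_normal()
        dW = np.sqrt(dt) * rng.standard_normal()
        x += (A * x + B * u) * dt + (C * x + D * u) * dW
        step_cost = Q * x ** 2 * dt
        cost += step_cost
        # Crude (biased) score-function estimate of d(cost)/d(theta).
        grad += step_cost * (u - theta * x) * x / sigma2
    theta -= lr * np.clip(grad, -1.0, 1.0)   # actor step on the policy mean
    # Data-driven adaptation heuristic: once the episodic cost stabilizes,
    # shrink the temperature (hence the policy variance) toward a floor.
    if prev_cost is not None and abs(prev_cost - cost) < 0.05:
        lam = max(0.01, 0.9 * lam)
    prev_cost = cost

print(f"learned gain theta = {theta:.3f}, final temperature = {lam:.3f}")
```

Tying the policy variance to the temperature reflects a standard fact in exploratory LQ control, where the optimal relaxed policy is Gaussian with variance proportional to the temperature; the stabilization-based decay above merely stands in for the paper's critic- and actor-driven adjustments.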

Yilie Huang, Xun Yu Zhou

Fundamental Theory of Automation

Yilie Huang, Xun Yu Zhou. Data-Driven Exploration for a Class of Continuous-Time Linear--Quadratic Reinforcement Learning Problems [EB/OL]. (2025-07-01) [2025-07-16]. https://arxiv.org/abs/2507.00358.
