
Sample and Computationally Efficient Continuous-Time Reinforcement Learning with General Function Approximation

Source: arXiv
Abstract

Continuous-time reinforcement learning (CTRL) provides a principled framework for sequential decision-making in environments where interactions evolve continuously over time. Despite its empirical success, the theoretical understanding of CTRL remains limited, especially in settings with general function approximation. In this work, we propose a model-based CTRL algorithm that achieves both sample and computational efficiency. Our approach leverages optimism-based confidence sets to establish the first sample complexity guarantee for CTRL with general function approximation, showing that a near-optimal policy can be learned with a suboptimality gap of $\tilde{O}(\sqrt{d_{\mathcal{R}} + d_{\mathcal{F}}}N^{-1/2})$ using $N$ measurements, where $d_{\mathcal{R}}$ and $d_{\mathcal{F}}$ denote the distributional Eluder dimensions of the reward and dynamics functions, respectively, capturing the complexity of general function approximation in reinforcement learning. Moreover, we introduce structured policy updates and an alternative measurement strategy that significantly reduce the number of policy updates and rollouts while maintaining competitive sample efficiency. We conduct experiments on continuous control tasks and diffusion model fine-tuning to support our proposed algorithms, demonstrating comparable performance with significantly fewer policy updates and rollouts.
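
To make the abstract's recipe concrete, the following minimal Python sketch imitates, under simplified assumptions, the high-level loop it describes: collect drift measurements from continuous-time rollouts, fit a dynamics model by least squares, act optimistically through a confidence-set bonus, and update the policy only every few episodes. This is an illustration only, not the authors' algorithm; the dynamics, feature map, reward proxy, and all names (features, true_drift, rollout, score) are hypothetical.

# Minimal sketch (NOT the paper's algorithm) of an optimism-driven,
# model-based continuous-time RL loop with infrequent policy updates.
import numpy as np

rng = np.random.default_rng(0)

def features(s, a):
    # Hypothetical feature map standing in for general function approximation
    # of the unknown drift f(s, a).
    return np.array([s, a, s * a, 1.0])

def true_drift(s, a):
    # Unknown continuous-time dynamics ds/dt = f(s, a); linear only for brevity.
    return -0.5 * s + a

def rollout(policy_w, dt=0.05, horizon=2.0, noise=0.05):
    # Euler-discretized rollout returning (state, action, measured drift) tuples.
    s, data = 1.0, []
    for _ in range(int(horizon / dt)):
        a = float(np.clip(policy_w @ np.array([s, 1.0]), -1.0, 1.0))
        drift = true_drift(s, a) + noise * rng.standard_normal()
        data.append((s, a, drift))
        s += drift * dt
    return data

A = np.eye(4)        # Gram matrix defining an elliptical confidence set
b = np.zeros(4)
theta = np.zeros(4)  # least-squares estimate of the drift model
policy_w = np.zeros(2)

for episode in range(20):
    for s, a, drift in rollout(policy_w):
        phi = features(s, a)
        A += np.outer(phi, phi)
        b += phi * drift
    theta = np.linalg.solve(A, b)

    if episode % 5 == 0:  # structured, infrequent policy updates
        def score(s, a):
            # Optimistic value proxy: drive the state toward 0 under the fitted
            # model, plus an exploration bonus from the confidence-set width.
            phi = features(s, a)
            bonus = float(np.sqrt(phi @ np.linalg.solve(A, phi)))
            return -abs(s + float(phi @ theta) * 0.05) + 0.1 * bonus

        # Crude planner: grid search over constant-gain policies a = k*s + c.
        grid = np.linspace(-1.0, 1.0, 21)
        policy_w = max(
            (np.array([k, c]) for k in grid for c in grid),
            key=lambda w: np.mean(
                [score(s, float(np.clip(w @ np.array([s, 1.0]), -1.0, 1.0)))
                 for s in np.linspace(-1.0, 1.0, 11)]
            ),
        )

The sketch keeps the two ingredients the abstract emphasizes: the optimism bonus derived from the confidence set (the sqrt term) and the structured schedule that updates the policy only once every few episodes while measurements keep accumulating.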

Runze Zhao, Yue Yu, Adams Yiyue Zhu, Chen Yang, Dongruo Zhou

Computing Technology; Computer Technology

Runze Zhao, Yue Yu, Adams Yiyue Zhu, Chen Yang, Dongruo Zhou. Sample and Computationally Efficient Continuous-Time Reinforcement Learning with General Function Approximation [EB/OL]. (2025-05-20) [2025-06-27]. https://arxiv.org/abs/2505.14821.
