
On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization

Source: arXiv
Abstract

We consider a regularized expected reward optimization problem in the non-oblivious setting that covers many existing problems in reinforcement learning (RL). To solve such an optimization problem, we apply and analyze the classical stochastic proximal gradient method. In particular, the method is shown to admit an $O(\epsilon^{-4})$ sample complexity for reaching an $\epsilon$-stationary point under standard conditions. Since the variance of the classical stochastic gradient estimator is typically large, which slows down the convergence, we also apply an efficient stochastic variance-reduced proximal gradient method with an importance-sampling-based ProbAbilistic Gradient Estimator (PAGE). Our analysis shows that the sample complexity can be improved from $O(\epsilon^{-4})$ to $O(\epsilon^{-3})$ under additional conditions. Our results on the stochastic (variance-reduced) proximal gradient method match the sample complexity of their most competitive counterparts for discounted Markov decision processes under similar settings. To the best of our knowledge, the proposed methods represent a novel approach in addressing the general regularized reward optimization problem.
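For orientation, the problem class described in the abstract can be written as follows; the notation ($\theta$, $J$, $p(\cdot;\theta)$, $G$, $\eta$) is introduced here purely for illustration and is not taken from the paper.

```latex
% A plausible formulation of the regularized expected reward problem
% (notation introduced here for illustration, not taken from the paper):
\[
  \max_{\theta \in \mathbb{R}^d} \;\; \Phi(\theta)
  := \underbrace{\mathbb{E}_{\tau \sim p(\cdot;\theta)}\big[ R(\tau) \big]}_{J(\theta)}
     \; - \; G(\theta),
\]
% where the sampling distribution p(.;theta) depends on the decision variable
% (the non-oblivious setting) and G is a possibly nonsmooth regularizer.
% An epsilon-stationary point is then typically measured via the proximal
% gradient mapping with step size eta:
\[
  \Big\| \tfrac{1}{\eta}\Big( \theta - \mathrm{prox}_{\eta G}\big(\theta + \eta \nabla J(\theta)\big) \Big) \Big\| \le \epsilon .
\]
```

The variance-reduced method combines a proximal (ascent) step with the PAGE estimator, which recomputes a large-batch gradient only with a small probability and otherwise corrects the previous estimate with a small-batch gradient difference. Below is a minimal sketch under these assumptions: `sample_batch`, `grad_on_batch`, and the $\ell_1$ regularizer are hypothetical placeholders introduced here (in the RL setting of the paper, `grad_on_batch` would also carry importance-sampling weights), not the authors' implementation.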
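```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def page_prox_gradient(sample_batch, grad_on_batch, theta0, eta=0.1, lam=1e-3,
                       p=0.1, big_batch=256, small_batch=8, num_iters=1000,
                       seed=0):
    """Sketch of a PAGE-style stochastic proximal gradient ascent loop for
    maximize_theta  J(theta) - lam * ||theta||_1.

    sample_batch(n)          -> a batch of n samples/trajectories (placeholder)
    grad_on_batch(theta, B)  -> stochastic gradient of J at theta on batch B
                                (in the RL setting this would include
                                importance-sampling corrections)
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    g = grad_on_batch(theta, sample_batch(big_batch))   # initial estimate
    for _ in range(num_iters):
        # Proximal gradient ascent step on the regularized objective.
        theta_new = soft_threshold(theta + eta * g, eta * lam)
        if rng.random() < p:
            # With probability p: refresh with a large-batch gradient.
            g = grad_on_batch(theta_new, sample_batch(big_batch))
        else:
            # Otherwise: reuse g, corrected by a small-batch gradient
            # difference evaluated at the new and old iterates on the
            # SAME minibatch (this is what keeps the variance small).
            b = sample_batch(small_batch)
            g = g + grad_on_batch(theta_new, b) - grad_on_batch(theta, b)
        theta = theta_new
    return theta
```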

Haizhao Yang, Ling Liang

Subjects: Computing and Computer Technology; Fundamental Theory of Automation

Haizhao Yang, Ling Liang. On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization [EB/OL]. (2024-01-23) [2025-07-25]. https://arxiv.org/abs/2401.12508.
