National Preprint Platform

An Optimistic Algorithm for Online CMDPs with Anytime Adversarial Constraints

Source: arXiv
English Abstract

Online safe reinforcement learning (RL) plays a key role in dynamic environments, with applications in autonomous driving, robotics, and cybersecurity. The objective is to learn optimal policies that maximize rewards while satisfying safety constraints modeled by constrained Markov decision processes (CMDPs). Existing methods achieve sublinear regret under stochastic constraints but often fail in adversarial settings, where constraints are unknown, time-varying, and potentially adversarially designed. In this paper, we propose the Optimistic Mirror Descent Primal-Dual (OMDPD) algorithm, the first to address online CMDPs with anytime adversarial constraints. OMDPD achieves optimal regret O(sqrt(K)) and strong constraint violation O(sqrt(K)) without relying on Slater's condition or the existence of a strictly known safe policy. We further show that access to accurate estimates of rewards and transitions can improve these bounds. Our results offer practical guarantees for safe decision-making in adversarial environments.
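The abstract's full OMDPD algorithm (and its CMDP setting) is not reproduced on this page; the sketch below only illustrates the generic primal-dual mirror-descent idea it builds on, applied to a toy online problem with an adversarial per-round cost constraint. All names, step sizes, and the budget constraint here are illustrative assumptions, not the paper's construction: the primal player runs optimistic exponentiated-gradient ascent on the Lagrangian (using the previous round's gradient as the optimistic hint), and the dual variable does projected gradient ascent on the constraint violation.

```python
import numpy as np

def omd_primal_dual(rewards, costs, budget, eta_x=0.1, eta_lam=0.1):
    """Generic optimistic mirror-descent primal-dual sketch (NOT the paper's
    OMDPD algorithm). Each round k, play a distribution x_k over n actions,
    then observe an adversarial reward vector rewards[k] and cost vector
    costs[k]; the (soft) constraint is x_k @ costs[k] <= budget."""
    K, n = rewards.shape
    x = np.full(n, 1.0 / n)       # primal iterate on the probability simplex
    lam = 0.0                     # dual variable for the cost constraint
    g_prev = np.zeros(n)          # optimistic hint: last round's gradient
    total_reward, total_violation = 0.0, 0.0
    for k in range(K):
        # Optimistic step: exponentiated gradient using the hint g_prev.
        x_play = x * np.exp(eta_x * g_prev)
        x_play /= x_play.sum()
        r_k, c_k = rewards[k], costs[k]        # adversary reveals this round
        total_reward += x_play @ r_k
        total_violation += max(0.0, x_play @ c_k - budget)
        # Lagrangian gradient w.r.t. x: reward minus lam-weighted cost.
        g = r_k - lam * c_k
        x = x * np.exp(eta_x * g)              # base mirror-descent update
        x /= x.sum()
        g_prev = g
        # Dual ascent on the violation, projected back to lam >= 0.
        lam = max(0.0, lam + eta_lam * (x_play @ c_k - budget))
    return total_reward, total_violation
```

With constant unit rewards and zero costs the constraint is never active, so the played distribution stays feasible and the cumulative reward is simply the horizon K; under genuinely adversarial costs, the dual variable grows while the constraint is violated and pushes the primal player back toward feasible play.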

Jiahui Zhu, Kihyun Yu, Dabeen Lee, Xin Liu, Honghao Wei

Subject: Computing Technology, Computer Technology

Jiahui Zhu, Kihyun Yu, Dabeen Lee, Xin Liu, Honghao Wei. An Optimistic Algorithm for Online CMDPs with Anytime Adversarial Constraints [EB/OL]. (2025-05-27) [2025-06-07]. https://arxiv.org/abs/2505.21841.