Action Robust Reinforcement Learning via Optimal Adversary Aware Policy Optimization

Source: arXiv
English Abstract

Reinforcement Learning (RL) has achieved remarkable success in sequential decision-making tasks. However, recent studies have revealed the vulnerability of RL policies to various perturbations, raising concerns about their effectiveness and safety in real-world applications. In this work, we focus on the robustness of RL policies against action perturbations and introduce a novel framework called Optimal Adversary-aware Policy Iteration (OA-PI). Our framework enhances action robustness under various perturbations by evaluating and improving policy performance against the corresponding optimal adversaries. Moreover, our approach can be integrated into mainstream DRL algorithms such as Twin Delayed DDPG (TD3) and Proximal Policy Optimization (PPO), improving action robustness while maintaining nominal performance and sample efficiency. Experimental results across various environments demonstrate that our method effectively enhances the robustness of DRL policies against different action adversaries.
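The action-perturbation setting the abstract describes — an adversary interfering with the actions a policy actually executes — can be sketched with a simple probabilistic perturbation model, where the adversary overrides the agent's action with some probability. This is a minimal illustrative sketch only; the function names, the mixing-probability model, and the worst-case adversary below are assumptions for exposition, not the paper's OA-PI implementation.

```python
import random

def perturbed_action(agent_action, adversary_action, alpha, rng):
    """Probabilistic action perturbation (illustrative model):
    with probability alpha the adversary's action replaces the agent's."""
    if rng.random() < alpha:
        return adversary_action
    return agent_action

# Toy rollout in a 1-D continuous action space [-1, 1].
rng = random.Random(0)
low, high = -1.0, 1.0
alpha = 0.1  # perturbation probability (assumed hyperparameter)

actions = []
for _ in range(1000):
    a_agent = rng.uniform(low, high)  # stand-in for a policy's chosen action
    a_adv = low                       # assumed worst-case adversary action
    actions.append(perturbed_action(a_agent, a_adv, alpha, rng))

# Fraction of steps where the adversary took over (should be near alpha).
frac_perturbed = sum(1 for a in actions if a == low) / len(actions)
```

A robust policy in this setting is one whose return remains high even when evaluated under the strongest such adversary, which is the quantity OA-PI's evaluation step targets.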

Buqing Nie、Yangqing Fu、Jingtian Ji、Yue Gao

Computing Technology; Computer Technology

Buqing Nie, Yangqing Fu, Jingtian Ji, Yue Gao. Action Robust Reinforcement Learning via Optimal Adversary Aware Policy Optimization [EB/OL]. (2025-07-04) [2025-07-16]. https://arxiv.org/abs/2507.03372.
