Context-Aware Safe Reinforcement Learning for Non-Stationary Environments
Safety is a critical concern when deploying reinforcement learning agents for realistic tasks. Recently, safe reinforcement learning algorithms have been developed to optimize the agent's performance while avoiding violations of safety constraints. However, few studies have addressed non-stationary disturbances in the environment, which may cause catastrophic outcomes. In this paper, we propose the context-aware safe reinforcement learning (CASRL) method, a meta-learning framework for safe adaptation in non-stationary environments. We use a probabilistic latent variable model to achieve fast inference of the posterior environment transition distribution given context data. Safety constraints are then evaluated with uncertainty-aware trajectory sampling. Because safety violations are costly, unsafe records are rare in the dataset; we address this issue by enabling prioritized sampling during model training and by formulating prior safety constraints from domain knowledge during constrained planning. The algorithm is evaluated in realistic safety-critical environments with non-stationary disturbances. Results show that it significantly outperforms existing baselines in terms of safety and robustness.
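The safety-evaluation step sketched in the abstract can be pictured roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: `dynamics_model.sample`, `infer_context`, and `cost_fn` are hypothetical placeholders standing in for a learned latent-variable transition model, posterior context inference from recent transitions, and a per-state safety cost.

```python
# Illustrative sketch (not the authors' code): uncertainty-aware trajectory
# sampling for safety evaluation, assuming a learned probabilistic dynamics
# model p(s' | s, a, z) conditioned on a latent context variable z inferred
# from recent context data. All names used here are hypothetical placeholders.
import numpy as np


def evaluate_action_sequence(dynamics_model, infer_context, cost_fn,
                             s0, actions, context_data,
                             n_particles=20, cost_limit=1.0):
    """Estimate the cumulative safety cost of an action sequence by
    propagating particles through the sampled stochastic dynamics."""
    # Sample from the posterior over the latent environment context.
    z_samples = infer_context(context_data, n_samples=n_particles)

    total_costs = np.zeros(n_particles)
    states = np.repeat(s0[None, :], n_particles, axis=0)
    for a in actions:
        # Each particle carries its own latent sample, capturing epistemic
        # uncertainty about the (possibly non-stationary) environment.
        next_states = dynamics_model.sample(states, a, z_samples)
        total_costs += cost_fn(next_states)  # per-step safety cost
        states = next_states

    # Conservative estimate: mean plus one standard deviation across particles.
    cost_estimate = total_costs.mean() + total_costs.std()
    return cost_estimate, cost_estimate <= cost_limit
```

A planner would call such a routine on each candidate action sequence and discard those whose estimated cost exceeds the constraint threshold.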
Jiacheng Zhu, Baiming Chen, Ding Zhao, Zuxin Liu, Wenhao Ding, Mengdi Xu
Subjects: Safety science; computing and computer technology; automation technology and equipment
Jiacheng Zhu, Baiming Chen, Ding Zhao, Zuxin Liu, Wenhao Ding, Mengdi Xu. Context-Aware Safe Reinforcement Learning for Non-Stationary Environments [EB/OL]. (2021-01-02) [2025-08-02]. https://arxiv.org/abs/2101.00531