Effective Reinforcement Learning Control using Conservative Soft Actor-Critic
Reinforcement Learning (RL) has shown great potential in complex control tasks, particularly when combined with deep neural networks within the Actor-Critic (AC) framework. However, in practical applications, balancing exploration, learning stability, and sample efficiency remains a significant challenge. Traditional methods such as Soft Actor-Critic (SAC) and Proximal Policy Optimization (PPO) address these issues by incorporating entropy or relative entropy regularization, but often suffer from instability and low sample efficiency. In this paper, we propose the Conservative Soft Actor-Critic (CSAC) algorithm, which seamlessly integrates entropy and relative entropy regularization within the AC framework. CSAC improves exploration through entropy regularization while preventing overly aggressive policy updates through relative entropy regularization. Evaluations on benchmark tasks and real-world robotic simulations demonstrate that CSAC offers significant improvements in stability and efficiency over existing methods. These findings suggest that CSAC provides strong robustness and practical potential for control tasks in dynamic environments.
Xinyi Yuan, Zhiwei Shang, Wenjun Huang, Yunduan Cui, Di Chen, Meixin Zhu
Subjects: Fundamental theory of automation; automation technology and equipment
Xinyi Yuan, Zhiwei Shang, Wenjun Huang, Yunduan Cui, Di Chen, Meixin Zhu. Effective Reinforcement Learning Control using Conservative Soft Actor-Critic [EB/OL]. (2025-05-06) [2025-05-21]. https://arxiv.org/abs/2505.03356.
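The abstract does not give CSAC's update equations. As a rough illustration of the idea it describes (an entropy bonus for exploration plus a relative-entropy penalty that keeps policy updates conservative), the following is a minimal sketch of what such a policy loss could look like in PyTorch. The names policy, old_policy, critic, alpha, and beta are hypothetical and not taken from the paper.

# A minimal sketch (not the authors' implementation) of a policy loss that combines
# SAC's entropy bonus with a KL penalty toward the previous policy, matching the
# abstract's description of entropy plus relative-entropy regularization.
# policy, old_policy, critic, alpha, and beta are illustrative assumptions.
import torch
from torch.distributions import kl_divergence

def csac_policy_loss(policy, old_policy, critic, states, alpha=0.2, beta=0.1):
    dist = policy(states)                        # current stochastic policy pi(.|s), e.g. a Normal
    actions = dist.rsample()                     # reparameterized sample so gradients flow
    log_prob = dist.log_prob(actions).sum(-1)    # log pi(a|s), summed over action dimensions
    q_value = critic(states, actions)            # soft Q-value estimate Q(s, a)

    with torch.no_grad():
        old_dist = old_policy(states)            # frozen snapshot of the previous policy
    kl = kl_divergence(dist, old_dist).sum(-1)   # D_KL(pi(.|s) || pi_old(.|s)) per state

    # Gradient ascent on Q + alpha * entropy - beta * KL, written as a loss to minimize:
    # the entropy term encourages exploration, the KL term keeps policy updates conservative.
    return (alpha * log_prob - q_value + beta * kl).mean()

In this sketch, alpha weights the entropy bonus and beta weights how strongly the new policy is pulled toward the previous one; the paper's actual objective, coefficients, and update rules are given in the full text at the arXiv link above.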