Soft Actor-Critic with Backstepping-Pretrained DeepONet for control of PDEs
This paper develops a reinforcement learning-based controller for the stabilization of partial differential equation (PDE) systems. Within the soft actor-critic (SAC) framework, we embed a DeepONet, a well-known neural operator (NO), which is pretrained using the backstepping controller. The pretrained DeepONet captures the essential features of the backstepping controller and serves as a feature extractor, replacing the convolutional neural network (CNN) layers in the original actor and critic networks, and connects directly to the fully connected layers of the SAC architecture. We apply this novel integrated backstepping and reinforcement learning method to stabilize an unstable first-order hyperbolic PDE and an unstable reaction-diffusion PDE. Simulation results demonstrate that the proposed method outperforms the standard SAC, SAC with an untrained DeepONet, and the backstepping controller on both systems.
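The abstract's key architectural idea, a backstepping-pretrained DeepONet substituted for the CNN feature layers of the SAC actor and critic, can be sketched as follows. This is a minimal illustration under assumed dimensions (sensor count, query grid, hidden widths) and class names (`DeepONetFeatures`, `Actor`), none of which are specified in the abstract; the actual pretraining on backstepping data and the critic network are omitted.

```python
import torch
import torch.nn as nn

class DeepONetFeatures(nn.Module):
    """Minimal DeepONet: the branch net encodes the sampled PDE state u(x_i),
    the trunk net encodes query locations, and features are their inner
    products. In the paper this module would be pretrained to imitate the
    backstepping controller; sizes here are hypothetical."""
    def __init__(self, n_sensors=50, n_queries=10, width=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, width), nn.Tanh(),
                                    nn.Linear(width, width))
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                   nn.Linear(width, width))
        # Fixed query grid on the spatial domain [0, 1].
        self.register_buffer("queries",
                             torch.linspace(0, 1, n_queries).unsqueeze(1))

    def forward(self, u):
        b = self.branch(u)            # (batch, width)
        t = self.trunk(self.queries)  # (n_queries, width)
        return b @ t.T                # (batch, n_queries) feature vector

class Actor(nn.Module):
    """SAC actor: the pretrained DeepONet replaces the CNN feature layers
    and feeds the usual fully connected Gaussian-policy head."""
    def __init__(self, features, n_queries=10, hidden=128):
        super().__init__()
        self.features = features  # backstepping-pretrained extractor
        self.head = nn.Sequential(nn.Linear(n_queries, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))  # mean, log_std

    def forward(self, u):
        z = self.features(u)
        mean, log_std = self.head(z).chunk(2, dim=-1)
        return mean, log_std.clamp(-20, 2)

actor = Actor(DeepONetFeatures())
u = torch.randn(4, 50)        # batch of sampled PDE states
mean, log_std = actor(u)      # each of shape (4, 1)
```

The critic would reuse the same `DeepONetFeatures` module with a value head in place of the policy head.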
Chenchen Wang, Jie Qi, Jiaqi Hu
Computing technology; computer technology
Chenchen Wang, Jie Qi, Jiaqi Hu. Soft Actor-Critic with Backstepping-Pretrained DeepONet for control of PDEs [EB/OL]. (2025-07-06) [2025-07-21]. https://arxiv.org/abs/2507.04232.