Reinforcement Learning-based Fault-Tolerant Control for Quadrotor with Online Transformer Adaptation
Multirotors play a significant role in diverse field robotics applications but remain highly susceptible to actuator failures, which lead to rapid instability and compromised mission reliability. While various fault-tolerant control (FTC) strategies using reinforcement learning (RL) have been explored, most previous approaches require prior knowledge of the multirotor model or struggle to adapt to new configurations. To address these limitations, we propose a novel hybrid RL-based FTC framework integrated with a transformer-based online adaptation module. Our framework leverages a transformer architecture to infer latent representations in real time, enabling adaptation to previously unseen system models without retraining. We evaluate our method in a PyBullet simulation under loss-of-effectiveness actuator faults, achieving a 95% success rate and a positional root mean square error (RMSE) of 0.129 m, outperforming existing adaptation methods with an 86% success rate and an RMSE of 0.153 m. Further evaluations on quadrotors with varying configurations confirm the robustness of our framework across untrained dynamics. These results demonstrate the potential of our framework to enhance the adaptability and reliability of multirotors, enabling efficient fault management in dynamic and uncertain environments. The project website is available at http://00dhkim.me/paper/rl-ftc.
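Since the abstract only summarizes the architecture, the following is a minimal PyTorch sketch of the general idea it describes: a transformer encoder compresses a recent window of state-action transitions into a latent context vector, and the RL policy conditions on that latent to adapt online to unseen (e.g., fault-degraded) dynamics without retraining. All module names, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a transformer-based online adaptation module feeding an RL policy.
# Dimensions and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn


class TransformerAdapter(nn.Module):
    """Infers a latent representation of the (possibly faulty) dynamics from recent history."""

    def __init__(self, state_dim=12, action_dim=4, latent_dim=16, d_model=64, history_len=20):
        super().__init__()
        self.embed = nn.Linear(state_dim + action_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, history_len, d_model))  # learned positional encoding
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, latent_dim)

    def forward(self, history):
        # history: (batch, history_len, state_dim + action_dim)
        x = self.embed(history) + self.pos
        x = self.encoder(x)
        return self.head(x.mean(dim=1))  # pooled latent context vector


class LatentConditionedPolicy(nn.Module):
    """RL policy conditioned on the current state and the inferred latent context."""

    def __init__(self, state_dim=12, action_dim=4, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # normalized rotor commands
        )

    def forward(self, state, latent):
        return self.net(torch.cat([state, latent], dim=-1))


if __name__ == "__main__":
    adapter = TransformerAdapter()
    policy = LatentConditionedPolicy()
    history = torch.randn(1, 20, 12 + 4)  # recent state-action transitions
    state = torch.randn(1, 12)            # current quadrotor state
    latent = adapter(history)             # online inference; no retraining needed
    action = policy(state, latent)
    print(action.shape)                   # torch.Size([1, 4])
```

In this kind of design, the adapter is typically run at every control step over a sliding history buffer, so a sudden loss of actuator effectiveness shifts the inferred latent and the policy's output without any gradient updates at deployment time.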
Dohyun Kim, Jayden Dongwoo Lee, Hyochoong Bang, Jungho Bae
Subject areas: aerospace technology; aviation computing technology; computer technology
Dohyun Kim, Jayden Dongwoo Lee, Hyochoong Bang, Jungho Bae. Reinforcement Learning-based Fault-Tolerant Control for Quadrotor with Online Transformer Adaptation [EB/OL]. (2025-05-13) [2025-06-07]. https://arxiv.org/abs/2505.08223