Interpretable Imitation Learning via Generative Adversarial STL Inference and Control
Imitation learning methods have demonstrated considerable success in teaching autonomous systems complex tasks through expert demonstrations. However, a limitation of these methods is their lack of interpretability, particularly in understanding the specific task the learning agent aims to accomplish. In this paper, we propose a novel imitation learning method that combines Signal Temporal Logic (STL) inference and control synthesis, enabling the explicit representation of the task as an STL formula. This approach not only provides a clear understanding of the task but also supports the integration of human knowledge and allows for adaptation to out-of-distribution scenarios by manually adjusting the STL formulas and fine-tuning the policy. We employ a Generative Adversarial Network (GAN)-inspired approach to train both the inference and policy networks, effectively narrowing the gap between expert and learned policies. The efficiency of our algorithm is demonstrated through simulations, showcasing its practical applicability and adaptability.
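The abstract describes adversarial co-training of an STL inference network (playing the discriminator role, scoring trajectories by the robustness of a parametric STL template) and a policy network (playing the generator role). The sketch below is a minimal illustration of that training pattern only, not the authors' implementation: the PyTorch networks, the single "eventually reach a learnable region" template with a smooth-max robustness surrogate, the toy single-integrator rollout, and the synthetic expert data are all assumptions made for illustration.

```python
# Hedged sketch of a GAN-style STL-inference / policy training loop.
# All shapes, the STL template, and the dynamics are illustrative assumptions.
import torch
import torch.nn as nn

class STLInferenceNet(nn.Module):
    """Discriminator role: maps a trajectory to a soft robustness score of a
    parametric STL template, here 'eventually ||x_t - center|| <= radius'."""
    def __init__(self, state_dim):
        super().__init__()
        self.center = nn.Parameter(torch.zeros(state_dim))  # learnable region center
        self.radius = nn.Parameter(torch.tensor(1.0))       # learnable region radius

    def forward(self, traj):  # traj: (batch, horizon, state_dim)
        dist = torch.norm(traj - self.center, dim=-1)        # per-step distance
        margins = self.radius - dist                         # per-step robustness
        # Smooth max over time approximates the 'eventually' operator.
        return torch.logsumexp(margins * 10.0, dim=-1) / 10.0

class PolicyNet(nn.Module):
    """Generator role: maps a state to an action."""
    def __init__(self, state_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                 nn.Linear(64, act_dim))

    def forward(self, state):
        return self.net(state)

def rollout(policy, x0, horizon):
    """Toy single-integrator rollout, included only to make the sketch runnable."""
    traj, x = [x0], x0
    for _ in range(horizon - 1):
        x = x + 0.1 * policy(x)
        traj.append(x)
    return torch.stack(traj, dim=1)  # (batch, horizon, state_dim)

# Illustrative dimensions and synthetic stand-in for expert demonstrations.
state_dim, act_dim, horizon, batch = 2, 2, 20, 32
expert_traj = torch.randn(batch, horizon, state_dim)

inference = STLInferenceNet(state_dim)
policy = PolicyNet(state_dim, act_dim)
opt_inf = torch.optim.Adam(inference.parameters(), lr=1e-3)
opt_pol = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(200):
    # Inference step: expert trajectories should satisfy the inferred formula
    # (high robustness) while learner trajectories should not.
    learner_traj = rollout(policy, torch.randn(batch, state_dim), horizon)
    loss_inf = -(inference(expert_traj).mean() - inference(learner_traj.detach()).mean())
    opt_inf.zero_grad(); loss_inf.backward(); opt_inf.step()

    # Policy step: maximize robustness of the currently inferred formula.
    learner_traj = rollout(policy, torch.randn(batch, state_dim), horizon)
    loss_pol = -inference(learner_traj).mean()
    opt_pol.zero_grad(); loss_pol.backward(); opt_pol.step()
```

Because the inferred specification stays an explicit (parametric) STL formula, it can be inspected or manually adjusted before the policy is fine-tuned, which is the interpretability and adaptation benefit the abstract highlights.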
Wenliang Liu, Danyang Li, Calin Belta, Erfan Aasi, Daniela Rus, Roberto Tron
Subjects: Fundamental theory of automation; automation technology and equipment
Wenliang Liu, Danyang Li, Calin Belta, Erfan Aasi, Daniela Rus, Roberto Tron. Interpretable Imitation Learning via Generative Adversarial STL Inference and Control [EB/OL]. (2025-07-18) [2025-08-10]. https://arxiv.org/abs/2402.10310