
Learning Nonlinear Causal Reductions to Explain Reinforcement Learning Policies

Source: arXiv

Abstract

Why do reinforcement learning (RL) policies fail or succeed? This is a challenging question due to the complex, high-dimensional nature of agent-environment interactions. In this work, we take a causal perspective on explaining the behavior of RL policies by viewing the states, actions, and rewards as variables in a low-level causal model. We introduce random perturbations to policy actions during execution and observe their effects on the cumulative reward, learning a simplified high-level causal model that explains these relationships. To this end, we develop a nonlinear Causal Model Reduction framework that ensures approximate interventional consistency, meaning the simplified high-level model responds to interventions in a similar way as the original complex system. We prove that for a class of nonlinear causal models, there exists a unique solution that achieves exact interventional consistency, ensuring learned explanations reflect meaningful causal patterns. Experiments on both synthetic causal models and practical RL tasks, including pendulum control and robot table tennis, demonstrate that our approach can uncover important behavioral patterns, biases, and failure modes in trained RL policies.
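
The abstract describes perturbing a policy's actions during execution and relating those perturbations to the cumulative reward, from which a simplified high-level causal model is learned. The sketch below illustrates only that data-collection idea under stated assumptions; it is not the authors' implementation. The Pendulum-v1 environment, the zero-torque placeholder policy, the noise scales, and the linear surrogate fit are all illustrative choices made here.

```python
# Illustrative sketch (not the paper's method): add Gaussian noise to a
# policy's actions during rollouts, record the cumulative reward, and fit a
# crude surrogate relating perturbation magnitude to the return.
import numpy as np
import gymnasium as gym  # assumed to be installed

def perturbed_return(env, policy, noise_scale, seed=0):
    """Run one episode with Gaussian noise added to each action; return the
    mean absolute perturbation and the cumulative reward."""
    rng = np.random.default_rng(seed)
    obs, _ = env.reset(seed=seed)
    total_reward, magnitudes, done = 0.0, [], False
    while not done:
        action = np.asarray(policy(obs), dtype=float)
        delta = rng.normal(scale=noise_scale, size=action.shape)
        obs, reward, terminated, truncated, _ = env.step(action + delta)
        total_reward += float(reward)
        magnitudes.append(float(np.abs(delta).mean()))
        done = terminated or truncated
    return float(np.mean(magnitudes)), total_reward

env = gym.make("Pendulum-v1")
policy = lambda obs: np.zeros(env.action_space.shape)  # placeholder policy
scales = np.linspace(0.0, 1.0, 20)
samples = [perturbed_return(env, policy, s, seed=i) for i, s in enumerate(scales)]
x, y = map(np.array, zip(*samples))
slope, intercept = np.polyfit(x, y, deg=1)  # toy stand-in for a high-level model
print(f"Estimated effect of perturbation magnitude on return: {slope:.2f}")
```

A linear fit is used here purely as a stand-in; the paper learns a nonlinear high-level causal model with approximate interventional consistency.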

Armin Kekić, Jan Schneider, Dieter Büchler, Bernhard Schölkopf, Michel Besserve

Subjects: Computing Technology, Computer Technology

Armin Kekić, Jan Schneider, Dieter Büchler, Bernhard Schölkopf, Michel Besserve. Learning Nonlinear Causal Reductions to Explain Reinforcement Learning Policies [EB/OL]. (2025-07-20) [2025-08-10]. https://arxiv.org/abs/2507.14901.
