Maximum Entropy Deep Inverse Reinforcement Learning

Source: arXiv

Abstract

This paper presents a general framework for exploiting the representational capacity of neural networks to approximate complex, nonlinear reward functions in the context of solving the inverse reinforcement learning (IRL) problem. We show in this context that the Maximum Entropy paradigm for IRL lends itself naturally to the efficient training of deep architectures. At test time, the approach has a computational complexity independent of the number of demonstrations, which makes it especially well-suited for applications in life-long learning scenarios. Our approach achieves performance commensurate with the state-of-the-art on existing benchmarks while exceeding it on an alternative benchmark based on highly varying reward structures. Finally, we extend the basic architecture - which is equivalent to a simplified subclass of Fully Convolutional Neural Networks (FCNNs) with width one - to include larger convolutions in order to eliminate the dependency on precomputed spatial features and work on raw input representations.
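
The training loop the abstract describes can be sketched compactly: a MaxEnt backward pass (soft value iteration) under the current network reward yields a stochastic policy, a forward pass propagates the start-state distribution to expected state visitation frequencies, and the difference between expert and expected visitations is backpropagated into the reward network. The minimal NumPy sketch below illustrates this on a tabular MDP; the random dynamics, the two-layer network, the hyperparameters, and the expert visitation counts are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Minimal sketch of Maximum Entropy Deep IRL on a small tabular MDP.
# The random MDP, the two-layer reward network, and all hyperparameters
# below are illustrative assumptions, not the paper's setup.

rng = np.random.default_rng(0)
n_states, n_actions, horizon = 16, 4, 20

# P[a, s, t] = probability of landing in state t after action a in state s
# (dynamics are assumed known, as in the tabular MaxEnt IRL setting).
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))

phi = np.eye(n_states)                     # one-hot state features
W1 = rng.normal(0.0, 0.1, (n_states, 32))  # hidden-layer weights
W2 = rng.normal(0.0, 0.1, (32, 1))         # output-layer weights

def reward(features):
    """Two-layer reward network; returns per-state rewards and hidden activations."""
    h = np.maximum(features @ W1, 0.0)     # ReLU hidden layer
    return (h @ W2).ravel(), h

def soft_value_iteration(r):
    """MaxEnt backward pass: soft Bellman backups give the stochastic policy pi(a|s)."""
    V = np.zeros(n_states)
    for _ in range(horizon):
        Q = r[:, None] + np.einsum('ast,t->sa', P, V)  # Q(s,a) = r(s) + E[V(s')]
        V = np.logaddexp.reduce(Q, axis=1)             # soft maximum over actions
    return np.exp(Q - V[:, None])

def expected_svf(pi, p0):
    """Forward pass: propagate p0 through pi to get expected state visitation frequencies."""
    d, mu = p0.copy(), p0.copy()
    for _ in range(horizon - 1):
        d = np.einsum('s,sa,ast->t', d, pi, P)
        mu += d
    return mu

# Expert visitation counts would normally be tallied from demonstrations;
# this target is made up purely for illustration.
mu_expert = np.zeros(n_states)
mu_expert[[0, -1]] = horizon / 2.0
p0 = np.ones(n_states) / n_states

lr = 0.05
for step in range(200):
    r, h = reward(phi)
    pi = soft_value_iteration(r)
    mu = expected_svf(pi, p0)
    # The MaxEnt IRL gradient of the demonstration log-likelihood with respect
    # to the reward is (expert - expected) visitations; chain it through the net.
    g_r = (mu_expert - mu)[:, None]
    gW2 = h.T @ g_r
    gh = (g_r @ W2.T) * (h > 0)   # ReLU gradient
    gW1 = phi.T @ gh
    W1 += lr * gW1                # gradient ascent on the log-likelihood
    W2 += lr * gW2
```

Note how, once trained, producing a reward and policy requires only a forward pass through the network and the planning routines; the demonstrations enter training solely through mu_expert. This is the property the abstract highlights: test-time complexity is independent of the number of demonstrations.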

Ingmar Posner, Peter Ondruska, Markus Wulfmeier

Subject: Computing Technology, Computer Technology

Ingmar Posner, Peter Ondruska, Markus Wulfmeier. Maximum Entropy Deep Inverse Reinforcement Learning [EB/OL]. (2015-07-17) [2025-07-02]. https://arxiv.org/abs/1507.04888.