
Bayesian Meta-Reinforcement Learning with Laplace Variational Recurrent Networks

Source: arXiv
Abstract

Meta-reinforcement learning trains a single reinforcement learning agent on a distribution of tasks so that it quickly generalizes to new tasks outside the training set at test time. From a Bayesian perspective, this can be interpreted as performing amortized variational inference on the posterior distribution over training tasks. Among the various meta-reinforcement learning approaches, a common method is to represent this distribution with a point estimate using a recurrent neural network. We show how one can augment this point estimate to obtain full distributions through the Laplace approximation, either at the start of, during, or after learning, without modifying the base model architecture. With our approximation, we are able to estimate distribution statistics (e.g., the entropy) of non-Bayesian agents, and we observe that point-estimate-based methods produce overconfident estimators while not satisfying consistency. Furthermore, when comparing our approach to full-distribution-based learning of the task posterior, our method performs on par with variational baselines while having far fewer parameters.
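The Laplace approximation referred to in the abstract turns a point estimate into a full Gaussian distribution by centering the Gaussian at the mode and using the inverse Hessian of the negative log posterior as its covariance. The following is a minimal sketch on a toy linear-Gaussian model (where the approximation happens to be exact), not the paper's recurrent-network setting; all variable names here are illustrative:

```python
import numpy as np

# Toy model: prior theta ~ N(0, I), likelihood y_i ~ N(x_i @ theta, 1).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
theta_true = np.array([1.0, -0.5])
y = X @ theta_true + rng.normal(size=50)

def neg_log_post(theta):
    """Negative log posterior, up to an additive constant."""
    return 0.5 * theta @ theta + 0.5 * np.sum((y - X @ theta) ** 2)

# Hessian of the negative log posterior (constant for this quadratic objective)
# and the MAP estimate, which here has a closed form.
H = np.eye(2) + X.T @ X
theta_map = np.linalg.solve(H, X.T @ y)

# Laplace approximation: posterior ≈ N(theta_map, H^{-1}).
cov = np.linalg.inv(H)

# With a full covariance in hand, distribution statistics such as the
# differential entropy become available, as discussed in the abstract.
entropy = 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * cov))
```

In the paper's setting the same idea is applied to the recurrent network's task representation; the sketch above only illustrates the mode-plus-curvature construction itself.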

Joery A. de Vries, Jinke He, Mathijs M. de Weerdt, Matthijs T. J. Spaan

Subject: Computing Technology; Computer Technology

Joery A. de Vries, Jinke He, Mathijs M. de Weerdt, Matthijs T. J. Spaan. Bayesian Meta-Reinforcement Learning with Laplace Variational Recurrent Networks [EB/OL]. (2025-05-24) [2025-06-21]. https://arxiv.org/abs/2505.18591
