Addressing Maximization Bias in Reinforcement Learning with Two-Sample
Testing
Martin Waltz, Ostap Okhrin
Abstract
Value-based reinforcement-learning algorithms have shown strong results in
games, robotics, and other real-world applications. Overestimation bias is a
known threat to those algorithms and can sometimes lead to dramatic performance
decreases or even complete algorithmic failure. We frame the bias problem
statistically and consider it an instance of estimating the maximum expected
value (MEV) of a set of random variables. We propose the $T$-Estimator (TE)
based on two-sample testing for the mean, which flexibly interpolates between
over- and underestimation by adjusting the significance level of the underlying
hypothesis tests. We also introduce a generalization, termed $K$-Estimator
(KE), that obeys the same bias and variance bounds as the TE and relies on a
nearly arbitrary kernel function. We introduce modifications of $Q$-Learning
and the Bootstrapped Deep $Q$-Network (BDQN) using the TE and the KE, and prove
convergence in the tabular setting. Furthermore, we propose an adaptive variant
of the TE-based BDQN that dynamically adjusts the significance level to
minimize the absolute estimation bias. All proposed estimators and algorithms
are thoroughly tested and validated on diverse tasks and environments,
illustrating the bias control and performance potential of the TE and KE.
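The maximization bias the abstract refers to can be seen in a few lines: for a set of random variables, the naive estimator of the maximum expected value (MEV), i.e. the maximum of the sample means, is positively biased. The sketch below is an illustration of this statistical problem only, not of the paper's TE/KE estimators; all parameter values are arbitrary choices for the demonstration.

```python
import numpy as np

# Illustration (not from the paper): estimating the Maximum Expected Value
# (MEV) of K random variables. All true means are 0, so the true MEV is 0,
# yet max_i(sample mean_i) is positively biased -- the maximization bias
# that value-based RL algorithms such as Q-Learning inherit.

rng = np.random.default_rng(0)
K, n, runs = 10, 20, 5000        # variables, samples per variable, MC runs
true_means = np.zeros(K)         # true MEV = max(true_means) = 0

naive = np.empty(runs)
for r in range(runs):
    samples = rng.normal(true_means, 1.0, size=(n, K))
    naive[r] = samples.mean(axis=0).max()   # naive MEV estimate

print(f"true MEV: {true_means.max():.3f}")
print(f"average naive estimate: {naive.mean():.3f}")  # clearly above 0
```

Averaged over many runs, the naive estimate settles well above the true value of 0, which is exactly the overestimation that the TE and KE are designed to control.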
Citation: Martin Waltz, Ostap Okhrin. Addressing Maximization Bias in Reinforcement Learning with Two-Sample Testing [EB/OL]. (2022-01-20) [2026-04-12]. https://arxiv.org/abs/2201.08078.
Subject classification: computing technology, computer technology.