Directly Forecasting Belief for Reinforcement Learning with Delays
Reinforcement learning (RL) with delays is challenging because sensory perceptions lag behind the actual events: the RL agent must estimate the real state of its environment from past observations. State-of-the-art (SOTA) methods typically forecast states recursively, step by step, which accumulates compounding errors. To tackle this problem, our novel belief estimation method, the Directly Forecasting Belief Transformer (DFBT), forecasts states directly from observations without incrementally estimating intermediate states. We theoretically demonstrate that DFBT greatly reduces the compounding errors of existing recursive forecasting methods, yielding stronger performance guarantees. In experiments on D4RL offline datasets, DFBT reduces compounding errors with remarkable prediction accuracy. DFBT's ability to forecast whole state sequences also enables multi-step bootstrapping, greatly improving learning efficiency. On the MuJoCo benchmark, our DFBT-based method substantially outperforms SOTA baselines. Code is available at https://github.com/QingyuanWuNothing/DFBT.
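The contrast between the two forecasting schemes in the abstract can be made concrete. The Python sketch below is illustrative only: `one_step_model`, `sequence_model`, and the toy dynamics in the demo are hypothetical stand-ins for learned networks, not the released DFBT implementation. It shows (i) recursive one-step rollouts, where each prediction feeds the next and errors compound over the delay window; (ii) the direct-forecasting interface, which maps the last observed state and the buffered actions to the whole unobserved state sequence in one pass; and (iii) the n-step bootstrapped value target that a full sequence forecast enables.

```python
import numpy as np

def recursive_forecast(one_step_model, s_delayed, actions):
    """Recursive (SOTA-style) belief estimation: roll a learned one-step
    model forward through the d buffered actions. Each prediction is fed
    back as the next input, so per-step errors compound over the delay."""
    s = s_delayed
    trajectory = []
    for a in actions:
        s = one_step_model(s, a)  # errors accumulate through this loop
        trajectory.append(s)
    return trajectory

def direct_forecast(sequence_model, s_delayed, actions):
    """Direct (DFBT-style) belief estimation, as a sketch: predict all d
    unobserved states in one pass from the last observed state and the
    action buffer, with no intermediate-state feedback."""
    return sequence_model(s_delayed, actions)

def multi_step_target(rewards, gamma, v_final):
    """n-step bootstrapped value target: possible here because the
    forecaster returns the whole state sequence, not just one state."""
    n = len(rewards)
    discounted = sum(gamma**k * r for k, r in enumerate(rewards))
    return discounted + gamma**n * v_final

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s0, actions = 1.0, list(rng.normal(size=5))

    # Toy one-step model with a small constant bias: after 5 recursive
    # steps the bias compounds, illustrating the error DFBT avoids.
    biased_step = lambda s, a: 0.9 * s + a + 0.05
    beliefs = recursive_forecast(biased_step, s0, actions)
    print("recursive forecast of s_t:", beliefs[-1])

    # Multi-step bootstrapping over the 5-step forecast window.
    rewards = [1.0] * 5
    print("5-step target:", multi_step_target(rewards, 0.99, v_final=10.0))
```

In this interface, a delay of d steps means the agent observes s_{t-d} plus the actions a_{t-d}, ..., a_{t-1} it has since taken; the recursive scheme applies the one-step model d times, while the direct scheme makes a single call, which is the source of the compounding-error gap the paper analyzes.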
Qingyuan Wu, Yuhui Wang, Simon Sinong Zhan, Yixuan Wang, Chung-Wei Lin, Chen Lv, Qi Zhu, Jürgen Schmidhuber, Chao Huang
Computing Technology; Computer Technology
Qingyuan Wu, Yuhui Wang, Simon Sinong Zhan, Yixuan Wang, Chung-Wei Lin, Chen Lv, Qi Zhu, Jürgen Schmidhuber, Chao Huang. Directly Forecasting Belief for Reinforcement Learning with Delays [EB/OL]. (2025-05-01) [2025-06-27]. https://arxiv.org/abs/2505.00546.