
VIPO: Value Function Inconsistency Penalized Offline Reinforcement Learning

Source: arXiv
Abstract

Offline reinforcement learning (RL) learns effective policies from pre-collected datasets, offering a practical solution for applications where online interactions are risky or costly. Model-based approaches are particularly advantageous for offline RL, owing to their data efficiency and generalizability. However, due to inherent model errors, model-based methods often artificially introduce conservatism guided by heuristic uncertainty estimation, which can be unreliable. In this paper, we introduce VIPO, a novel model-based offline RL algorithm that incorporates self-supervised feedback from value estimation to enhance model training. Specifically, the model is learned by additionally minimizing the inconsistency between the value learned directly from the offline data and the one estimated from the model. We perform comprehensive evaluations from multiple perspectives to show that VIPO can learn a highly accurate model efficiently and consistently outperform existing methods. It offers a general framework that can be readily integrated into existing model-based offline RL algorithms to systematically enhance model accuracy. As a result, VIPO achieves state-of-the-art performance on almost all tasks in both D4RL and NeoRL benchmarks.
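
As a rough illustration of the idea described in the abstract, the sketch below adds a value-inconsistency penalty to a standard one-step dynamics loss: the Bellman target computed from the real offline transition is compared with the target computed from the model's predicted transition, using the same value function learned from the data. This is a minimal sketch under assumed names and interfaces (DynamicsModel, model_loss, q_net, policy, the batch layout, and the weight lam are all illustrative), not the authors' implementation; PyTorch is assumed.

```python
# Illustrative sketch only; names, shapes, and the weighting term `lam` are
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class DynamicsModel(nn.Module):
    """Predicts next state and reward from (state, action)."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim + 1),  # next state + reward
        )

    def forward(self, s, a):
        out = self.net(torch.cat([s, a], dim=-1))
        return out[..., :-1], out[..., -1]  # (predicted next state, predicted reward)


def model_loss(model, q_net, policy, batch, gamma=0.99, lam=1.0):
    """Dynamics regression loss plus a value-inconsistency penalty (sketch).

    `q_net(s, a)` is assumed to be a value function learned from the offline
    data, and `policy(s)` an action-selection function; both are hypothetical
    interfaces used only to illustrate the penalty term.
    """
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]

    # Standard one-step dynamics/reward regression on the offline transitions.
    s_pred, r_pred = model(s, a)
    dyn_loss = ((s_pred - s_next) ** 2).mean() + ((r_pred - r) ** 2).mean()

    # Bellman target from the real transition (treated as fixed) versus the
    # target implied by the model's predicted transition.
    with torch.no_grad():
        q_data = r + gamma * q_net(s_next, policy(s_next))
    q_model = r_pred + gamma * q_net(s_pred, policy(s_pred))

    inconsistency = ((q_model - q_data) ** 2).mean()
    return dyn_loss + lam * inconsistency
```

Minimizing this combined objective trains the model not only to fit the observed transitions but also to keep its implied value estimates consistent with those computed directly from the offline data, which is the self-supervised feedback the abstract refers to.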

Xuyang Chen, Guojian Wang, Keyu Yan, Lin Zhao

Subject: Computing Technology, Computer Technology

Xuyang Chen, Guojian Wang, Keyu Yan, Lin Zhao. VIPO: Value Function Inconsistency Penalized Offline Reinforcement Learning [EB/OL]. (2025-04-16) [2025-07-16]. https://arxiv.org/abs/2504.11944
