Bridging Supervised and Temporal Difference Learning with $Q$-Conditioned Maximization
Recently, supervised learning (SL) has emerged as an effective methodology for offline reinforcement learning (RL) due to its simplicity, stability, and efficiency. However, recent studies show that SL methods lack the trajectory-stitching capability typically associated with temporal difference (TD)-based approaches. A question naturally surfaces: how can we endow SL methods with stitching capability and bridge their performance gap with TD learning? To answer this question, we introduce $Q$-conditioned maximization supervised learning for offline goal-conditioned RL, which equips SL with stitching capability through a $Q$-conditioned policy and $Q$-conditioned maximization. Concretely, we propose Goal-Conditioned Reinforced Supervised Learning (GCReinSL), which consists of (1) estimating the $Q$-function with a CVAE from the offline dataset and (2) finding the maximum $Q$-value within the data support by combining $Q$-function maximization with expectile regression. At inference time, our policy selects optimal actions conditioned on this maximum $Q$-value. Experimental results on stitching evaluations over offline RL datasets demonstrate that our method outperforms prior SL approaches with stitching capabilities as well as goal data augmentation techniques.
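As a rough illustration of the expectile-regression component described above (a minimal sketch, not the authors' implementation), the snippet below shows the standard asymmetric-L2 expectile loss in PyTorch; with an expectile level close to 1, it biases an estimate toward an in-support upper bound of the $Q$-value targets. The names `v_net`, `states`, `goals`, and `q_targets` are hypothetical placeholders.

```python
import torch


def expectile_loss(diff: torch.Tensor, tau: float = 0.9) -> torch.Tensor:
    """Asymmetric L2 (expectile) loss.

    For tau close to 1, minimizing this loss pushes the estimate toward an
    upper expectile of the target distribution, i.e. an in-support
    approximation of the maximum.
    """
    weight = torch.where(diff > 0, tau, 1.0 - tau)
    return (weight * diff.pow(2)).mean()


# Illustrative usage (placeholder names): v_net predicts the maximum
# in-support Q-value for a (state, goal) pair, and q_targets are Q-value
# samples estimated from the offline dataset (e.g., via a CVAE).
#
#   v_pred = v_net(states, goals)
#   loss = expectile_loss(q_targets.detach() - v_pred, tau=0.9)
```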
Sheng Xu, Yunhao Luo, Fei Shen, Xuetao Zhang, Donglin Wang, Xing Lei, Zifeng Zhuang, Shentao Yang
Computing Technology; Computer Technology
Sheng Xu, Yunhao Luo, Fei Shen, Xuetao Zhang, Donglin Wang, Xing Lei, Zifeng Zhuang, Shentao Yang. Bridging Supervised and Temporal Difference Learning with $Q$-Conditioned Maximization [EB/OL]. (2025-05-31) [2025-07-20]. https://arxiv.org/abs/2506.00795.