
Analytic Energy-Guided Policy Optimization for Offline Reinforcement Learning

Source: arXiv
Abstract

Conditional decision generation with diffusion models has shown powerful competitiveness in reinforcement learning (RL). Recent studies reveal the relationship between energy-function-guided diffusion models and constrained RL problems. The main challenge lies in estimating the intermediate energy, which is intractable due to its log-expectation formulation during the generation process. To address this issue, we propose Analytic Energy-guided Policy Optimization (AEPO). Specifically, we first provide a theoretical analysis and a closed-form solution for the intermediate guidance when the diffusion model obeys a conditional Gaussian transformation. Then, we analyze the posterior Gaussian distribution in the log-expectation formulation and obtain a target estimate of the log-expectation under mild assumptions. Finally, we train an intermediate energy neural network to approach this target estimate. We evaluate our method on more than 30 offline RL tasks, and extensive experiments show that it surpasses numerous representative baselines on the D4RL offline reinforcement learning benchmarks.
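To make the quantity at the heart of the abstract concrete, the following is a minimal sketch of the log-expectation formulation of the intermediate energy as it is commonly written in the energy-guidance literature; the symbols here (the data-space energy \(\mathcal{E}\), the inverse temperature \(\beta\), and the diffusion posterior \(q(x_0 \mid x_t)\)) are illustrative notation and may differ from the paper's own:

\[
\mathcal{E}_t(x_t) \;=\; -\log \, \mathbb{E}_{q(x_0 \mid x_t)}\!\left[ e^{-\beta \mathcal{E}(x_0)} \right],
\qquad
\nabla_{x_t} \log p_t(x_t) \;=\; \nabla_{x_t} \log q_t(x_t) \;-\; \nabla_{x_t} \mathcal{E}_t(x_t).
\]

The expectation over the posterior \(q(x_0 \mid x_t)\) is what makes \(\mathcal{E}_t\) intractable in general. The abstract's claim is that when the diffusion transitions are conditional Gaussians and the posterior is treated as Gaussian under mild assumptions, this log-expectation admits a closed-form target that the intermediate energy network can be trained to match.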

Shengchao Hu, Zhejian Yang, Lichao Sun, Li Shen, Hechang Chen, Dacheng Tao, Yi Chang, Sili Huang, Jifeng Hu

Computing technology; computer technology

Shengchao Hu, Zhejian Yang, Lichao Sun, Li Shen, Hechang Chen, Dacheng Tao, Yi Chang, Sili Huang, Jifeng Hu. Analytic Energy-Guided Policy Optimization for Offline Reinforcement Learning [EB/OL]. (2025-05-03) [2025-06-29]. https://arxiv.org/abs/2505.01822.
