
Moderate Actor-Critic Methods: Controlling Overestimation Bias via Expectile Loss

Source: arXiv
Abstract

Overestimation is a fundamental characteristic of model-free reinforcement learning (MF-RL), arising from the principles of temporal difference learning and the approximation of the Q-function. To address this challenge, we propose a novel moderate target in the Q-function update, formulated as a convex optimization of an overestimated Q-function and its lower bound. Our primary contribution lies in the efficient estimation of this lower bound through the lower expectile of the Q-value distribution conditioned on a state. Notably, our moderate target integrates seamlessly into state-of-the-art (SOTA) MF-RL algorithms, including Deep Deterministic Policy Gradient (DDPG) and Soft Actor Critic (SAC). Experimental results validate the effectiveness of our moderate target in mitigating overestimation bias in DDPG, SAC, and distributional RL algorithms.
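The abstract describes estimating a lower bound on the Q-value via a lower expectile and mixing it with the standard bootstrapped target. Below is a minimal sketch of that idea, assuming a PyTorch setting; the expectile level tau, the mixing weight lam, and the helper names (expectile_loss, moderate_target, lower_critic) are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of the expectile-based moderate target, assuming PyTorch.
# tau, lam, and all names below are illustrative, not the paper's exact choices.
import torch


def expectile_loss(pred, target, tau=0.1):
    """Asymmetric squared error whose minimizer is the tau-expectile of `target`.

    With tau < 0.5, under-predictions are penalized less than over-predictions,
    so the estimate sits below the mean and acts as a lower bound on the
    Q-value distribution conditioned on a state.
    """
    diff = target - pred
    weight = torch.where(diff > 0, torch.full_like(diff, tau), torch.full_like(diff, 1.0 - tau))
    return (weight * diff.pow(2)).mean()


def moderate_target(q_target, q_lower, lam=0.5):
    """Convex mixture of the (overestimated) bootstrapped target and its
    expectile-based lower bound; lam in [0, 1] trades off optimism and
    conservatism in the critic update."""
    return lam * q_target + (1.0 - lam) * q_lower


# Illustrative use inside a DDPG/SAC-style critic update:
#   td_target   = r + gamma * (1 - done) * target_q(next_s, next_a)
#   q_low       = lower_critic(s, a)                 # trained with expectile_loss
#   y           = moderate_target(td_target, q_low)  # target for the main critic
#   critic_loss = ((critic(s, a) - y) ** 2).mean()
```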

Ukjo Hwang, Songnam Hong

Computing Technology; Computer Technology

Ukjo Hwang, Songnam Hong. Moderate Actor-Critic Methods: Controlling Overestimation Bias via Expectile Loss [EB/OL]. (2025-04-14) [2025-05-15]. https://arxiv.org/abs/2504.09929.
