
Normality-Guided Distributional Reinforcement Learning for Continuous Control

Source: arXiv
Abstract

Learning a predictive model of the mean return, or value function, plays a critical role in many reinforcement learning algorithms. Distributional reinforcement learning (DRL) has been shown to improve performance by modeling the value distribution, not just the mean. We study the value distribution in several continuous control tasks and find that the learned value distribution is empirically quite close to normal. We design a method that exploits this property, employing variances predicted from a variance network, along with returns, to analytically compute target quantile bars representing a normal for our distributional value function. In addition, we propose a policy update strategy based on the correctness as measured by structural characteristics of the value distribution not present in the standard value function. The approach we outline is compatible with many DRL structures. We use two representative on-policy algorithms, PPO and TRPO, as testbeds. Our method yields statistically significant improvements in 10 out of 16 continuous task settings, while utilizing a reduced number of weights and achieving faster training time compared to an ensemble-based method for quantifying value distribution uncertainty.
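The abstract describes analytically computing target quantile values of a normal distribution from a return estimate and a variance predicted by a variance network. Below is a minimal sketch of that idea, not the authors' code; the function name, quantile-midpoint convention, and numerical guard are assumptions.

```python
# Sketch: given a return estimate (mean) and a predicted variance, compute the
# quantile values of N(mean, variance) at quantile midpoints. These could serve
# as regression targets for a quantile-based distributional value head.
import numpy as np
from scipy.stats import norm

def normal_quantile_targets(mean: float, variance: float, n_quantiles: int = 32) -> np.ndarray:
    """Quantile values of N(mean, variance) at the midpoint levels tau_i = (2i+1)/(2N)."""
    std = np.sqrt(max(variance, 1e-8))                 # guard against non-positive variance
    taus = (np.arange(n_quantiles) + 0.5) / n_quantiles
    return mean + std * norm.ppf(taus)                 # inverse CDF of the normal

# Example: targets for a state with estimated return 5.0 and predicted variance 4.0.
print(normal_quantile_targets(mean=5.0, variance=4.0, n_quantiles=8))
```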

Ju-Seung Byun, Andrew Perrault

Subject: Computing Technology; Computer Technology

Ju-Seung Byun, Andrew Perrault. Normality-Guided Distributional Reinforcement Learning for Continuous Control [EB/OL]. (2025-07-07) [2025-07-23]. https://arxiv.org/abs/2208.13125.
