
Diverse Exploration via Conjugate Policies for Policy Gradient Methods

Source: arXiv
Abstract

We address the challenge of effective exploration while maintaining good performance in policy gradient methods. As a solution, we propose diverse exploration (DE) via conjugate policies. DE learns and deploys a set of conjugate policies which can be conveniently generated as a byproduct of conjugate gradient descent. We provide both theoretical and empirical results showing the effectiveness of DE at achieving exploration, improving policy performance, and the advantage of DE over exploration by random policy perturbations.
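The abstract notes that conjugate policies can be generated as a byproduct of conjugate gradient descent (CG is used, for instance, to approximate natural-gradient steps in trust-region policy gradient methods). A minimal sketch of that idea, under the assumption that a symmetric positive-definite matrix `A` stands in for the Fisher information: the search directions CG produces while solving `A x = b` are mutually A-conjugate, so collecting them yields a set of perturbation directions that are orthogonal under `A`. The function name and interface below are illustrative, not from the paper.

```python
import numpy as np

def cg_with_directions(A, b, iters=None, tol=1e-10):
    """Solve A x = b by conjugate gradient, also returning the
    search directions generated along the way.

    In exact arithmetic the directions p_0, p_1, ... satisfy
    p_i^T A p_j = 0 for i != j, i.e. they are A-conjugate. It is
    this byproduct that could supply mutually "diverse" policy
    perturbation directions in the sense of the abstract.
    """
    n = b.shape[0]
    iters = n if iters is None else iters
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # first search direction
    directions = []
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)   # exact line search along p
        x = x + alpha * p
        directions.append(p.copy())
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves update
        p = r_new + beta * p
        r = r_new
    return x, directions
```

For a well-conditioned n-by-n system, CG converges in at most n iterations, and pairwise products `p_i @ A @ p_j` for `i != j` come out numerically close to zero, which is the conjugacy property the paper's construction relies on.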

Xiangrong Tong, Lei Yu, Andrew Cohen, Xingye Qiao, Elliot Way

Computing technology, computer technology

Xiangrong Tong, Lei Yu, Andrew Cohen, Xingye Qiao, Elliot Way. Diverse Exploration via Conjugate Policies for Policy Gradient Methods [EB/OL]. (2019-02-10) [2025-08-02]. https://arxiv.org/abs/1902.03633.
