
Bi-Level Policy Optimization with Nystr\"om Hypergradients

Source: arXiv

Abstract

The dependency of the actor on the critic in actor-critic (AC) reinforcement learning means that AC can be characterized as a bilevel optimization (BLO) problem, also called a Stackelberg game. This characterization motivates two modifications to vanilla AC algorithms. First, the critic's update should be nested to learn a best response to the actor's policy. Second, the actor should update according to a hypergradient that takes changes in the critic's behavior into account. Computing this hypergradient involves finding an inverse Hessian vector product, a process that can be numerically unstable. We thus propose a new algorithm, Bilevel Policy Optimization with Nystr\"om Hypergradients (BLPO), which uses nesting to account for the nested structure of BLO, and leverages the Nystr\"om method to compute the hypergradient. Theoretically, we prove BLPO converges to (a point that satisfies the necessary conditions for) a local strong Stackelberg equilibrium in polynomial time with high probability, assuming a linear parametrization of the critic's objective. Empirically, we demonstrate that BLPO performs on par with or better than PPO on a variety of discrete and continuous control tasks.

Arjun Prakash, Naicheng He, Denizalp Goktas, Amy Greenwald

Subject: Computing Technology, Computer Technology

Arjun Prakash, Naicheng He, Denizalp Goktas, Amy Greenwald. Bi-Level Policy Optimization with Nystr\"om Hypergradients [EB/OL]. (2025-05-16) [2025-07-16]. https://arxiv.org/abs/2505.11714.
