
Strategic learning for disturbance rejection in multi-agent systems: Nash and Minmax in graphical games

Source: arXiv

Abstract

This article investigates the optimal control problem with disturbance rejection for discrete-time multi-agent systems under cooperative and non-cooperative graphical-game frameworks. Given the practical difficulty of obtaining accurate system models, Q-function-based policy iteration methods are proposed to seek the Nash equilibrium solution of the cooperative graphical game and the distributed minmax solution of the non-cooperative graphical game. To implement these methods online, two reinforcement learning frameworks are developed: an actor-disturber-critic structure for the cooperative graphical game and an actor-adversary-disturber-critic structure for the non-cooperative graphical game. The stability of the proposed methods is rigorously analyzed, and simulation results illustrate their effectiveness.
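To give a flavor of the Q-function-based policy iteration idea the abstract refers to, below is a minimal single-agent sketch for a discrete-time linear-quadratic problem. The paper's setting (multi-agent graphical games with disturbance rejection, learned online from data) is considerably more involved; this sketch only illustrates the alternating policy-evaluation / policy-improvement loop, and all system matrices here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Assumed discrete-time linear system x_{k+1} = A x_k + B u_k (illustrative values)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)          # state cost weight
Rc = np.array([[1.0]])  # input cost weight

K = np.zeros((1, 2))    # initial stabilizing feedback gain (u = -K x)

for _ in range(50):
    Acl = A - B @ K
    # Policy evaluation: P solves P = (Qc + K'RcK) + Acl' P Acl
    # (fixed-point iteration on the discrete Lyapunov equation)
    M = Qc + K.T @ Rc @ K
    P = M.copy()
    for _ in range(500):
        P = M + Acl.T @ P @ Acl
    # Q-function blocks for Q(x,u) = [x;u]' H [x;u]:
    #   H_uu = Rc + B'PB,  H_ux = B'PA
    Huu = Rc + B.T @ P @ B
    Hux = B.T @ P @ A
    # Policy improvement: K <- H_uu^{-1} H_ux
    K = np.linalg.solve(Huu, Hux)

# At convergence P satisfies the discrete algebraic Riccati equation
dare_residual = A.T @ P @ A - P \
    - A.T @ P @ B @ np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A) + Qc
```

In the paper's games, each agent's Q-function additionally depends on neighbors' actions and the disturbance, and the H-blocks are estimated from measured data rather than from (A, B).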

Xinyang Wang, Martin Guay, Shimin Wang, Hongwei Zhang

Subject: Fundamental Theory of Automation

Xinyang Wang, Martin Guay, Shimin Wang, Hongwei Zhang. Strategic learning for disturbance rejection in multi-agent systems: Nash and Minmax in graphical games [EB/OL]. (2025-04-10) [2025-04-26]. https://arxiv.org/abs/2504.07547.
