
TVDO: Tchebycheff Value-Decomposition Optimization for Multi-Agent Reinforcement Learning


Source: arXiv
Abstract

In cooperative multi-agent reinforcement learning (MARL), centralized training with decentralized execution (CTDE) has recently attracted increasing attention because of the practical demands of deployment. However, the central dilemma of CTDE is the inconsistency between jointly trained policies and individually executed actions. In this article, we propose a factorized Tchebycheff value-decomposition optimization (TVDO) method to overcome this inconsistency. In particular, inspired by the Tchebycheff method in multi-objective optimization, we formulate a nonlinear Tchebycheff aggregation function that realizes the global optimum by tightly constraining the upper bound of the individual action-value bias. We theoretically prove that, without extra restrictions, factorized value decomposition with Tchebycheff aggregation satisfies both the sufficiency and the necessity of Individual-Global-Max (IGM), which guarantees consistency between the global and individual optimal action-value functions. Empirically, in the climb and penalty games, we verify that TVDO precisely expresses the global-to-individual value decomposition with a guarantee of policy consistency. We further evaluate TVDO on the SMAC benchmark, where extensive experiments demonstrate that it significantly outperforms several state-of-the-art MARL baselines.
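
For context, the sketch below illustrates the weighted Tchebycheff scalarization from multi-objective optimization that the abstract builds on. It is a minimal illustration in plain Python under assumed names and values (the function tchebycheff_aggregate, the weights, and the reference point are all hypothetical), not the paper's actual TVDO implementation, which adapts this idea to per-agent action values.

```python
# Sketch of a weighted Tchebycheff scalarization, assuming per-agent
# values Q_i, positive weights w_i, and a reference (ideal) point z_i*.
# In multi-objective optimization, the Tchebycheff method scalarizes
# objectives as g(x) = max_i w_i * |f_i(x) - z_i*|, bounding the worst
# per-objective deviation. Illustrative only, not the TVDO codebase.

from typing import Sequence


def tchebycheff_aggregate(q_values: Sequence[float],
                          weights: Sequence[float],
                          reference: Sequence[float]) -> float:
    """Return the weighted Tchebycheff aggregation of per-agent values."""
    assert len(q_values) == len(weights) == len(reference)
    return max(w * abs(q - z)
               for q, w, z in zip(q_values, weights, reference))


# Example: three agents' (hypothetical) action values against an ideal point.
q = [1.2, 0.8, 1.5]   # individual action values Q_i
w = [1.0, 1.0, 1.0]   # positive weights w_i
z = [1.5, 1.5, 1.5]   # reference (ideal) values z_i*
print(tchebycheff_aggregate(q, w, z))  # 0.7 = max(0.3, 0.7, 0.0)
```
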

Xiaoliang Hu, Pengcheng Guo, Yadong Li, Guanyu Li, Zhen Cui, Jian Yang

DOI: 10.1109/TNNLS.2024.3455422

Subject: Computing Technology; Computer Technology

Xiaoliang Hu, Pengcheng Guo, Yadong Li, Guanyu Li, Zhen Cui, Jian Yang. TVDO: Tchebycheff Value-Decomposition Optimization for Multi-Agent Reinforcement Learning [EB/OL]. (2025-08-05) [2025-08-16]. https://arxiv.org/abs/2306.13979.
