Networked Communication for Decentralised Agents in Mean-Field Games
We introduce networked communication to the mean-field game framework, in particular to oracle-free settings where $N$ decentralised agents learn along a single, non-episodic run of the empirical system. We prove that our architecture has sample guarantees bounded between those of the centralised- and independent-learning cases. We provide the order of the difference in these bounds in terms of network structure and number of communication rounds, and also contribute a policy-update stability guarantee. We discuss how the sample guarantees of the three theoretical algorithms do not actually result in practical convergence. We therefore show that in practical settings where the theoretical parameters are not observed (leading to poor estimation of the Q-function), our communication scheme considerably accelerates learning over the independent case, often performing similarly to a centralised learner while removing the restrictive assumption of the latter. We contribute further practical enhancements to all three theoretical algorithms, allowing us to present their first empirical demonstrations. Our experiments confirm that we can remove several of the theoretical assumptions of the algorithms, and display the empirical convergence benefits brought by our new networked communication. We additionally show that our networked approach has significant advantages over both alternatives in terms of robustness to update failures and to changes in population size.
Alessandro Abate, Patrick Benjamin
Subjects: Communications/Wireless Communication; Computing Technology/Computer Technology
Alessandro Abate, Patrick Benjamin. Networked Communication for Decentralised Agents in Mean-Field Games [EB/OL]. (2023-06-05) [2025-05-12]. https://arxiv.org/abs/2306.02766