
Dynamic Graph Communication for Decentralised Multi-Agent Reinforcement Learning


Source: arXiv
Abstract

This work presents a novel communication framework for decentralized multi-agent systems operating in dynamic network environments. Integrated into a multi-agent reinforcement learning system, the framework is designed to enhance decision-making by optimizing the network's collective knowledge through efficient communication. Key contributions include adapting a static network packet-routing scenario to a dynamic setting with node failures, incorporating a graph attention network layer in a recurrent message-passing framework, and introducing a multi-round communication targeting mechanism. This approach enables an attention-based aggregation mechanism to be successfully trained within a sparse-reward, dynamic network packet-routing environment using only reinforcement learning. Experimental results show improvements in routing performance, including a 9.5 percent increase in average rewards and a 6.4 percent reduction in communication overhead compared to a baseline system. The study also examines the ethical and legal implications of deploying such systems in critical infrastructure and military contexts, identifies current limitations, and suggests potential directions for future research.
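The abstract's attention-based aggregation can be illustrated with a minimal sketch. The code below is a hypothetical, dependency-free rendition of one round of masked graph-attention message aggregation over a dynamic topology, not the paper's implementation: function names, the scalar scaled-dot-product scoring, and the adjacency-mask representation of node failures are all illustrative assumptions.

```python
import math

def attention_aggregate(features, adjacency):
    """One round of masked graph-attention message aggregation (illustrative sketch).

    features  -- list of per-agent feature vectors (lists of floats)
    adjacency -- adjacency[i][j] is True if agent j is a live neighbour of i;
                 node failures in a dynamic network simply remove entries
                 from this mask between rounds.
    Returns one aggregated neighbourhood message per agent.
    """
    d = len(features[0])
    messages = []
    for i, f_i in enumerate(features):
        scores, neighbours = [], []
        for j, f_j in enumerate(features):
            if i != j and adjacency[i][j]:
                # Scaled dot-product score against each live neighbour only.
                scores.append(sum(a * b for a, b in zip(f_i, f_j)) / math.sqrt(d))
                neighbours.append(f_j)
        if not neighbours:
            # Isolated agent (all neighbours failed): no incoming message.
            messages.append([0.0] * d)
            continue
        # Softmax over the masked scores, shifted by the max for stability.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Attention-weighted sum of neighbour features.
        messages.append(
            [sum(w * f[k] for w, f in zip(weights, neighbours)) for k in range(d)]
        )
    return messages
```

Running this per communication round, with the mask updated as nodes fail, mirrors the multi-round setting the abstract describes; a learned variant would replace the raw dot product with trained query/key projections.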

Ben McClusky

Subject areas: communication; automation technology and equipment; computing technology; computer technology

Ben McClusky. Dynamic Graph Communication for Decentralised Multi-Agent Reinforcement Learning [EB/OL]. (2024-12-30) [2025-08-24]. https://arxiv.org/abs/2501.00165.
