Signal attenuation enables scalable decentralized multi-agent reinforcement learning over networks

Source: arXiv
English Abstract

Multi-agent reinforcement learning (MARL) methods typically require that agents enjoy global state observability, preventing development of decentralized algorithms and limiting scalability. Recent work has shown that, under assumptions on decaying inter-agent influence, global observability can be replaced by local neighborhood observability at each agent, enabling decentralization and scalability. Real-world applications enjoying such decay properties remain underexplored, however, despite the fact that signal power decay, or signal attenuation, due to path loss is an intrinsic feature of many problems in wireless communications and radar networks. In this paper, we show that signal attenuation enables decentralization in MARL by considering the illustrative special case of performing power allocation for target detection in a radar network. To achieve this, we propose two new constrained multi-agent Markov decision process formulations of this power allocation problem, derive local neighborhood approximations for global value function and policy gradient estimates and establish corresponding error bounds, and develop decentralized saddle point policy gradient algorithms for solving the proposed problems. Our approach, though oriented towards the specific radar network problem we consider, provides a useful model for extensions to additional problems in wireless communications and radar networks.
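To make the abstract's argument concrete, the sketch below writes out, in assumed notation not taken from the paper, the kind of structure it describes: path-loss decay of received power with inter-agent distance, a κ-hop truncation of each agent's value estimate with an exponentially decaying error bound, and a Lagrangian saddle-point form of a constrained power-allocation objective. The symbols d_{ij}, α, κ, N_i^κ, J_r, J_c, and P̄ are illustrative placeholders.

% Illustrative sketch only: notation is assumed, not taken from the paper.
\begin{align*}
  % Path loss: power received by agent i from agent j decays with distance d_{ij}
  P_{ij} &\propto d_{ij}^{-\alpha}, \qquad \alpha \ge 2, \\
  % kappa-hop truncation: agent i approximates its Q-function using only the
  % states and actions of its kappa-hop neighborhood N_i^\kappa, with an error
  % that shrinks exponentially as kappa grows
  \sup_{s,a}\,\bigl|\,Q_i(s,a) - \hat{Q}_i\bigl(s_{N_i^{\kappa}}, a_{N_i^{\kappa}}\bigr)\bigr|
    &\le C\,\rho^{\kappa+1}, \qquad \rho \in (0,1), \\
  % Constrained power allocation as a saddle point: maximize detection reward
  % J_r subject to a power-budget constraint J_c <= \bar{P}, via the Lagrangian
  \max_{\theta}\ \min_{\lambda \ge 0}\
    & J_r(\theta) - \lambda\bigl(J_c(\theta) - \bar{P}\bigr).
\end{align*}

Under a bound of this form, each agent can build its policy-gradient and Lagrange-multiplier updates from κ-hop information alone, which is the property the abstract appeals to when it argues that signal attenuation enables decentralization.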

Wesley A Suttle, Vipul K Sharma, Brian M Sadler

Wireless communications; Radar communications

Wesley A Suttle, Vipul K Sharma, Brian M Sadler. Signal attenuation enables scalable decentralized multi-agent reinforcement learning over networks [EB/OL]. (2025-05-16) [2025-06-06]. https://arxiv.org/abs/2505.11461.
