Towards Unsupervised Multi-Agent Reinforcement Learning via Task-Agnostic Exploration
In reinforcement learning, unsupervised pre-training refers to pre-training a policy without a priori access to the task specification, i.e., rewards, so that it can later be used for efficient learning of downstream tasks. In single-agent settings, the problem has been extensively studied and is mostly understood. A popular approach, called task-agnostic exploration, casts the unsupervised objective as maximizing the entropy of the state distribution induced by the agent's policy, from which principles and methods follow. In contrast, little is known about task-agnostic exploration in multi-agent settings, which are ubiquitous in the real world. What are the pros and cons of alternative problem formulations in this setting? How hard is the problem in theory, and how can we solve it in practice? In this paper, we address these questions by first characterizing those alternative formulations and highlighting how the problem, even when tractable in theory, is non-trivial in practice. Then, we present a scalable, decentralized, trust-region policy search algorithm to address the problem in practical settings. Finally, we provide numerical validations that both corroborate the theoretical findings and pave the way for unsupervised multi-agent reinforcement learning via task-agnostic exploration in challenging domains, showing that optimizing for a specific objective, namely mixture entropy, provides an excellent trade-off between tractability and performance.
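As a point of reference (a sketch of standard notation, not taken verbatim from the paper), the single-agent task-agnostic exploration objective is commonly written as maximizing the entropy of the state distribution $d^{\pi}$ induced by the policy $\pi$, and the mixture-entropy objective mentioned in the abstract can then be sketched, assuming $n$ agents with per-agent state distributions $d^{\pi_i}$, as the entropy of their uniform mixture:
\[
\max_{\pi} \; H\big(d^{\pi}\big) = -\sum_{s \in \mathcal{S}} d^{\pi}(s)\,\log d^{\pi}(s),
\qquad
\max_{\pi_1,\dots,\pi_n} \; H\Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} d^{\pi_i}\Big).
\]
The exact formulation used in the paper (e.g., discounted vs. average-state distributions, joint vs. per-agent objectives) should be taken from the full text.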
Riccardo Zamboni, Mirco Mutti, Marcello Restelli
Computing technology; computer technology
Riccardo Zamboni, Mirco Mutti, Marcello Restelli. Towards Unsupervised Multi-Agent Reinforcement Learning via Task-Agnostic Exploration [EB/OL]. (2025-06-24) [2025-07-16]. https://arxiv.org/abs/2502.08365.