
Enhancing Aerial Combat Tactics through Hierarchical Multi-Agent Reinforcement Learning

Source: arXiv
English Abstract

This work presents a Hierarchical Multi-Agent Reinforcement Learning framework for analyzing simulated air combat scenarios involving heterogeneous agents. The objective is to identify effective Courses of Action that lead to mission success within preset simulations, thereby enabling the exploration of real-world defense scenarios at low cost and in a safe-to-fail setting. Applying deep Reinforcement Learning in this context poses specific challenges, such as complex flight dynamics, the exponential size of the state and action spaces in multi-agent systems, and the capability to integrate real-time control of individual units with look-ahead planning. To address these challenges, the decision-making process is split into two levels of abstraction: low-level policies control individual units, while a high-level commander policy issues macro commands aligned with the overall mission targets. This hierarchical structure facilitates the training process by exploiting policy symmetries of individual agents and by separating control from command tasks. The low-level policies are trained for individual combat control in a curriculum of increasing complexity. The high-level commander is then trained on mission targets given pre-trained control policies. The empirical validation confirms the advantages of the proposed framework.
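The abstract describes a two-level decision hierarchy: shared low-level policies control individual units, while a high-level commander issues macro commands aligned with mission targets. The following minimal Python sketch illustrates that structure only; it is not the authors' code, and all class names, commands, and actions (`CommanderPolicy`, `UnitPolicy`, `engage`, `fire`, etc.) are hypothetical stand-ins for learned policies.

```python
import random

# Illustrative sketch of a two-level command hierarchy (hypothetical names,
# random stand-ins for learned policies).

MACRO_COMMANDS = ["engage", "evade", "regroup"]
UNIT_ACTIONS = ["turn_left", "turn_right", "accelerate", "fire"]

class CommanderPolicy:
    """High-level policy: issues one macro command per unit each decision step."""
    def act(self, global_obs, n_units):
        # Placeholder for a learned commander; here a random choice per unit.
        return [random.choice(MACRO_COMMANDS) for _ in range(n_units)]

class UnitPolicy:
    """Low-level policy, shared across symmetric units and conditioned on
    the commander's macro command, as in the paper's weight-sharing setup."""
    def act(self, local_obs, command):
        # The macro command restricts the unit's admissible low-level actions.
        if command == "evade":
            return random.choice(["turn_left", "turn_right", "accelerate"])
        return random.choice(UNIT_ACTIONS)

def hierarchical_step(commander, unit_policy, global_obs, local_obs_list):
    """One decision step: commander issues commands, units act under them."""
    commands = commander.act(global_obs, len(local_obs_list))
    return [unit_policy.act(obs, cmd) for obs, cmd in zip(local_obs_list, commands)]
```

Separating command from control in this way lets each level be trained on a smaller problem, which is the motivation the abstract gives for the hierarchical split.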

Ardian Selmonaj, Oleg Szehr, Giacomo Del Rio, Alessandro Antonucci, Adrian Schneider, Michael Rüegsegger

Subjects: Strategy, Operations and Tactics; Aerospace Technology

Ardian Selmonaj, Oleg Szehr, Giacomo Del Rio, Alessandro Antonucci, Adrian Schneider, Michael Rüegsegger. Enhancing Aerial Combat Tactics through Hierarchical Multi-Agent Reinforcement Learning [EB/OL]. (2025-05-13) [2025-06-03]. https://arxiv.org/abs/2505.08995.
