
Light Aircraft Game : Basic Implementation and training results analysis

Source: arXiv
Abstract

This paper investigates multi-agent reinforcement learning (MARL) in a partially observable, cooperative-competitive combat environment known as LAG. We describe the environment's setup, including agent actions, hierarchical controls, and reward design across different combat modes such as No Weapon and ShootMissile. Two representative algorithms are evaluated: HAPPO, an on-policy heterogeneous-agent variant of PPO, and HASAC, an off-policy method based on soft actor-critic. We analyze their training stability, reward progression, and inter-agent coordination capabilities. Experimental results show that HASAC performs well in simpler coordination tasks without weapons, while HAPPO demonstrates stronger adaptability in more dynamic and complex scenarios involving missile combat. These findings provide insights into the trade-offs between on-policy and off-policy methods in multi-agent settings.

Hanzhong Cao

Subject: Military Technology

Hanzhong Cao. Light Aircraft Game: Basic Implementation and training results analysis [EB/OL]. (2025-06-16) [2025-07-16]. https://arxiv.org/abs/2506.14164.