Multi-Agent Reinforcement Learning (MARL) has seen revolutionary breakthroughs through its successful application to multi-agent cooperative tasks such as robot swarm control, autonomous vehicle coordination, and computer games. Recent works have applied Proximal Policy Optimization (PPO) to multi-agent tasks, yielding Multi-Agent PPO (MAPPO). However, MAPPO in current works lacks a theory that guarantees its convergence, and it requires handcrafted agent-specific features, a variant called MAPPO-Agent-Specific (MAPPO-AS). In addition, the performance of MAPPO-AS is still lower than that of finetuned QMIX on the popular benchmark environment StarCraft Multi-Agent Challenge (SMAC). In this paper, we first theoretically generalize PPO to MAPPO via an approximate lower bound of Trust Region Policy Optimization (TRPO), which guarantees its convergence. Second, the centralized advantage value function in vanilla MAPPO may mislead the learning of agents whose actions are unrelated to these advantage values, a problem we call \textit{Policies Overfitting in Multi-agent Cooperation (POMAC)}. We propose noisy credit-assignment methods (Noisy-MAPPO and Advantage-Noisy-MAPPO) to solve it. The experimental results show that the average performance of Noisy-MAPPO is better than that of finetuned QMIX; Noisy-MAPPO is the first algorithm to achieve winning rates above 90\% in all SMAC scenarios. We open-source the code at \url{https://github.com/hijkzzz/noisy-mappo}.
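To make the noisy credit-assignment idea concrete, the sketch below perturbs a shared (centralized) advantage estimate with independent per-agent noise before it is fed to each agent's policy update. This is a minimal illustration only: the function name, the Gaussian noise scheme, and the noise scale are assumptions for exposition and are not taken from the authors' implementation (see the linked repository for the actual method).

\begin{verbatim}
import numpy as np

def noisy_credit_assignment(shared_advantages, num_agents,
                            noise_std=0.5, seed=None):
    """Illustrative per-agent perturbation of centralized advantages.

    shared_advantages: sequence of length T with the centralized
        advantage estimate at each timestep.
    Returns an array of shape (T, num_agents): each agent receives the
    shared advantage plus its own noise term, so agents whose actions
    are weakly related to the team return do not all overfit to the
    same credit signal.
    """
    rng = np.random.default_rng(seed)
    shared = np.asarray(shared_advantages, dtype=np.float64).reshape(-1, 1)
    noise = rng.normal(0.0, noise_std, size=(shared.shape[0], num_agents))
    return shared + noise

# Example: three agents sharing one advantage trajectory.
advs = noisy_credit_assignment([0.8, -0.2, 1.5], num_agents=3, seed=0)
print(advs.shape)  # (3, 3): one perturbed advantage column per agent
\end{verbatim}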