Hierarchical Macro Strategy Model for MOBA Game AI (1812.07887v1)

Published 19 Dec 2018 in cs.MA and cs.AI

Abstract: The next challenge of game AI lies in Real Time Strategy (RTS) games. RTS games provide partially observable gaming environments, where agents interact with one another in an action space much larger than that of GO. Mastering RTS games requires both strong macro strategies and delicate micro level execution. Recently, great progress has been made in micro level execution, while complete solutions for macro strategies are still lacking. In this paper, we propose a novel learning-based Hierarchical Macro Strategy model for mastering MOBA games, a sub-genre of RTS games. Trained by the Hierarchical Macro Strategy model, agents explicitly make macro strategy decisions and further guide their micro level execution. Moreover, each of the agents makes independent strategy decisions, while simultaneously communicating with the allies through leveraging a novel imitated cross-agent communication mechanism. We perform comprehensive evaluations on a popular 5v5 Multiplayer Online Battle Arena (MOBA) game. Our 5-AI team achieves a 48% winning rate against human player teams which are ranked top 1% in the player ranking system.

Hierarchical Macro Strategy Model for MOBA Game AI

This paper presents a Hierarchical Macro Strategy (HMS) model aimed at improving AI performance in Multiplayer Online Battle Arena (MOBA) games, specifically addressing challenges in Real Time Strategy (RTS) game environments. The research underscores the necessity of mastering macro-level strategies alongside micro-level execution, a domain where existing AI systems, such as OpenAI Five, have shown limitations despite strong micro-level skills.

The proposed HMS model leverages a learning-based hierarchical approach, integrating macro strategy decision-making with guidance for micro-level execution. A distinctive feature is the imitated cross-agent communication mechanism, which enables independent yet coordinated strategy decisions among AI agents. The model was evaluated in a popular 5v5 MOBA game, where the 5-AI team achieved a 48% win rate against human teams ranked in the top 1% of the player ranking system, illustrating competitive performance and effective strategy coordination.
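
To make this high-level pipeline concrete, the following is a minimal sketch of a per-agent decision step: a macro prediction (a game phase plus an attention region on the map) is computed first and then conditions micro-level action selection. All function names, feature shapes, and the toy rules below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the macro-to-micro decision flow described above.
# Names, shapes, and rules are illustrative assumptions only.
import numpy as np

N_PHASES = 4          # assumed number of game phases (e.g. laning, mid, late)
MAP_REGIONS = 24 * 24 # assumed discretization of the map into coarse regions

def predict_macro(observation: np.ndarray) -> tuple[int, int]:
    """Stand-in for the HMS model: returns a game phase and an attention
    region (the map area the agent should move toward or act on)."""
    rng = np.random.default_rng(0)
    phase_logits = rng.normal(size=N_PHASES)          # placeholder network output
    attention_logits = rng.normal(size=MAP_REGIONS)   # placeholder network output
    return int(phase_logits.argmax()), int(attention_logits.argmax())

def micro_policy(observation: np.ndarray, phase: int, region: int) -> str:
    """Micro-level execution conditioned on the macro decision.
    A toy rule here; in the paper this is the learned micro-level model."""
    return f"move_toward_region_{region}" if phase < 2 else f"fight_in_region_{region}"

if __name__ == "__main__":
    obs = np.zeros((64, 64, 8))          # dummy per-agent observation planes
    phase, region = predict_macro(obs)   # macro strategy decision
    action = micro_policy(obs, phase, region)
    print(phase, region, action)
```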

Computational Complexity and MOBA Game Challenges

The paper emphasizes the computational complexity and multi-agent challenges present in MOBA games. MOBA environments exhibit action and state spaces far larger than those of Go, given the number of units and strategic variables involved. The introduction highlights four aspects contributing to this difficulty: the vastness of the action/state space, the need for multi-agent coordination, imperfect information caused by mechanics such as the "fog of war", and the sparse, delayed rewards that follow from a match's length and dynamics.

Hierarchical Model Structure

The HMS model features a two-layer architecture:

  1. Phase Recognition Layer: This layer models game phases, capturing the critical periods (such as laning, ganking, and mid-to-late game dynamics) that dictate broader strategic considerations. Phases are recognized with respect to major in-game resources, such as turrets and base objectives, which allows attention to be modeled separately for each phase.
  2. Attention Prediction Layer: This layer predicts strategic hotspots on the game map as a distribution over map regions, helping agents decide where to position and maneuver for resource control and tactical advantage. Agents use this distribution to prioritize the areas of greatest immediate strategic value (a minimal code sketch of both layers follows this list).
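
As referenced above, here is a minimal PyTorch sketch of the two-layer structure: a shared trunk over mini-map-like feature planes feeds a phase-recognition head whose hidden representation conditions an attention head over coarse map regions. The layer sizes, input format, and conditioning scheme are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of a two-layer HMS-style network: phase recognition conditions
# attention prediction. Architecture details are assumed, not from the paper.
import torch
import torch.nn as nn

class HMSSketch(nn.Module):
    def __init__(self, in_planes=8, n_phases=4, map_cells=24 * 24):
        super().__init__()
        # Shared convolutional trunk over mini-map-like feature planes.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_planes, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        trunk_dim = 64 * 16 * 16          # flattened size for 64x64 input planes
        # Phase recognition layer: classifies the current strategic phase.
        self.phase_hidden = nn.Linear(trunk_dim, 256)
        self.phase_out = nn.Linear(256, n_phases)
        # Attention prediction layer: logits over map regions, conditioned on
        # the phase layer's hidden representation.
        self.attn_out = nn.Linear(trunk_dim + 256, map_cells)

    def forward(self, x):
        h = self.trunk(x)
        phase_h = torch.relu(self.phase_hidden(h))
        phase_logits = self.phase_out(phase_h)
        attn_logits = self.attn_out(torch.cat([h, phase_h], dim=1))
        return phase_logits, attn_logits

if __name__ == "__main__":
    model = HMSSketch()
    planes = torch.zeros(1, 8, 64, 64)              # dummy batch of feature planes
    phase_logits, attn_logits = model(planes)
    print(phase_logits.shape, attn_logits.shape)    # (1, 4), (1, 576)
```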

Both layers work cohesively to reduce computational complexity while providing structured guidance for micro-level execution. The innovative use of the imitated cross-agent communication mechanism strengthens agent cooperation, simulating the tactical communication observed in human players.
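
A rough sketch of how such imitated communication might be realized, under assumed interfaces: each agent keeps the attention distributions its allies predicted on the previous step and feeds them back in as extra input features, so coordination emerges without explicit runtime messaging. The AgentSketch class, feature layout, and placeholder model below are hypothetical.

```python
# Hypothetical sketch of imitated cross-agent communication: allies' last
# attention predictions are appended to each agent's input features.
import numpy as np

MAP_REGIONS = 24 * 24
TEAM_SIZE = 5

class AgentSketch:
    def __init__(self, agent_id: int):
        self.agent_id = agent_id

    def predict_attention(self, own_obs: np.ndarray, ally_attention: np.ndarray) -> np.ndarray:
        """Return a probability distribution over map regions, given the agent's
        own observation concatenated with allies' last attention maps."""
        features = np.concatenate([own_obs.ravel(), ally_attention.ravel()])
        rng = np.random.default_rng(self.agent_id + features.size)  # placeholder for a trained model
        logits = rng.normal(size=MAP_REGIONS)
        probs = np.exp(logits - logits.max())
        return probs / probs.sum()

def team_step(agents, observations, last_attention):
    """One decision step for the whole team: every agent conditions on the
    attention its allies predicted on the previous step."""
    new_attention = np.zeros_like(last_attention)
    for i, agent in enumerate(agents):
        allies = np.delete(last_attention, i, axis=0)   # drop the agent's own map
        new_attention[i] = agent.predict_attention(observations[i], allies)
    return new_attention

if __name__ == "__main__":
    agents = [AgentSketch(i) for i in range(TEAM_SIZE)]
    obs = np.zeros((TEAM_SIZE, 64, 64, 8))
    attention = np.full((TEAM_SIZE, MAP_REGIONS), 1.0 / MAP_REGIONS)  # uniform at t=0
    attention = team_step(agents, obs, attention)
    print(attention.shape, attention.sum(axis=1))       # (5, 576), rows sum to ~1.0
```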

Experimental Results

Extensive experiments show that the HMS model captures effective macro strategies, distinguishing strategic phases and allocating attention efficiently across the map. The full AI outperformed ablated variants that lacked the macro strategy layers or the cross-agent communication mechanism. The results highlight the role of macro strategy in gaining a competitive edge over human teams ranked in the top 1%, with the agents pushing lanes and timing strategic deployments in ways that resemble experienced human play.

Implications and Future Work

The results have significant implications for RTS game AI, opening pathways for extending HMS-style strategies to other complex multi-agent domains such as StarCraft and robotic soccer. The structured approach also provides a baseline policy for further reinforcement learning and more advanced planning, potentially integrating Monte Carlo tree search for longer-horizon strategy search.

Future work may focus on adaptive planning built on the HMS foundation, exploring how planning techniques can be integrated effectively in complex game environments where partial observability and strategic foresight are critical. The paper establishes a promising direction for refining AI strategies at both the macro and micro levels, with potential impact beyond gaming in fields that require strategic coordination among intelligent agents.

Authors (9)
  1. Bin Wu (202 papers)
  2. Qiang Fu (159 papers)
  3. Jing Liang (89 papers)
  4. Peng Qu (18 papers)
  5. Xiaoqian Li (10 papers)
  6. Liang Wang (512 papers)
  7. Wei Liu (1135 papers)
  8. Wei Yang (349 papers)
  9. Yongsheng Liu (5 papers)
Citations (61)