
Towards Playing Full MOBA Games with Deep Reinforcement Learning (2011.12692v4)

Published 25 Nov 2020 in cs.AI and cs.LG

Abstract: MOBA games, e.g., Honor of Kings, League of Legends, and Dota 2, pose grand challenges to AI systems such as multi-agent, enormous state-action space, complex action control, etc. Developing AI for playing MOBA games has raised much attention accordingly. However, existing work falls short in handling the raw game complexity caused by the explosion of agent combinations, i.e., lineups, when expanding the hero pool in case that OpenAI's Dota AI limits the play to a pool of only 17 heroes. As a result, full MOBA games without restrictions are far from being mastered by any existing AI system. In this paper, we propose a MOBA AI learning paradigm that methodologically enables playing full MOBA games with deep reinforcement learning. Specifically, we develop a combination of novel and existing learning techniques, including curriculum self-play learning, policy distillation, off-policy adaption, multi-head value estimation, and Monte-Carlo tree-search, in training and playing a large pool of heroes, meanwhile addressing the scalability issue skillfully. Tested on Honor of Kings, a popular MOBA game, we show how to build superhuman AI agents that can defeat top esports players. The superiority of our AI is demonstrated by the first large-scale performance test of MOBA AI agent in the literature.

Authors (18)
  1. Deheng Ye (50 papers)
  2. Guibin Chen (14 papers)
  3. Wen Zhang (170 papers)
  4. Sheng Chen (133 papers)
  5. Bo Yuan (151 papers)
  6. Bo Liu (484 papers)
  7. Jia Chen (85 papers)
  8. Zhao Liu (97 papers)
  9. Fuhao Qiu (3 papers)
  10. Hongsheng Yu (3 papers)
  11. Yinyuting Yin (2 papers)
  12. Bei Shi (10 papers)
  13. Liang Wang (512 papers)
  14. Tengfei Shi (6 papers)
  15. Qiang Fu (159 papers)
  16. Wei Yang (349 papers)
  17. Lanxiao Huang (16 papers)
  18. Wei Liu (1135 papers)
Citations (171)

Summary

Understanding the AI Advancements in MOBA Game Playing

The paper "Towards Playing Full MOBA Games with Deep Reinforcement Learning" presents a substantial advance in applying AI to multiplayer online battle arena (MOBA) games such as Honor of Kings, League of Legends, and Dota 2. These games involve complex multi-agent interactions and demand long-horizon strategic planning and precise execution, which have historically challenged AI systems due to enormous state-action spaces and the combinatorial explosion of agent lineups.

Methodological Innovations

The authors introduce a MOBA AI learning paradigm based on deep reinforcement learning (DRL) that enables AI to master full MOBA games without the hero-pool restrictions of previous work such as OpenAI Five, which limited play to 17 heroes. This paradigm integrates several machine learning techniques, namely:

  1. Curriculum Self-Play Learning (CSPL): This approach structures the learning process by using progressively complex tasks. It starts with fixed hero lineups to train smaller teacher models and gradually builds complexity by merging policies through multi-teacher policy distillation.
  2. Off-Policy Adaption: Because experience in large-scale distributed training is generated by slightly stale actor policies, the system applies off-policy corrections during training, maintaining stability and performance over long training horizons.
  3. Multi-Head Value Estimation (MHV): This method integrates a decomposition of game rewards into multiple heads to provide more nuanced value estimates, aiding AI agents in learning different aspects of game strategy more effectively.
  4. Monte-Carlo Tree Search (MCTS): To efficiently manage hero drafting with large pools, a neural network-based MCTS drafting agent is deployed, providing practical solutions to computational challenges posed by expansive hero pools.
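The multi-teacher distillation step in CSPL can be sketched as minimizing the cross-entropy between each fixed-lineup teacher's policy and a single student policy, averaged over teachers. A minimal numpy illustration (the shapes and function names here are illustrative, not the paper's):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_probs):
    # Cross-entropy from teacher to student; equals KL(teacher || student)
    # plus the teacher's entropy, which is constant w.r.t. the student.
    log_p = np.log(softmax(student_logits) + 1e-12)
    return float(-np.mean(np.sum(teacher_probs * log_p, axis=-1)))

def multi_teacher_distill_loss(per_teacher_batches):
    # Each teacher was trained on a fixed lineup in the first phase of CSPL;
    # the student merges them by minimizing the average distillation loss.
    return float(np.mean([distill_loss(s, t) for s, t in per_teacher_batches]))
```

A student whose logits match a teacher's incurs a lower loss than a mismatched one, which is what drives the merged policy toward each teacher on that teacher's lineups.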

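For the drafting step, the core idea is to search over pick sequences while a learned value function scores completed lineups. The toy sketch below uses root-level UCB selection with random draft completions; the hero pool, draft length, and value function are stand-ins, not the paper's:

```python
import math
import random

HEROES = list(range(8))       # toy hero pool; the paper scales to 40 heroes
DRAFT_LEN = 4                 # toy draft order: blue, red, blue, red

def lineup_value(draft):
    # Stand-in for the paper's learned win-rate predictor over a
    # completed draft, evaluated from the blue side's perspective.
    blue, red = sum(draft[0::2]), sum(draft[1::2])
    return 1.0 if blue > red else 0.0

def mcts_pick(picks, n_sim=2000, c=1.4):
    # One root-level UCB decision for whichever team picks next.
    avail = [h for h in HEROES if h not in picks]
    stats = {h: [0, 0.0] for h in avail}            # visits, total value
    for _ in range(n_sim):
        total = sum(v for v, _ in stats.values()) + 1
        h = max(avail, key=lambda a:
                (stats[a][1] / stats[a][0]
                 + c * math.sqrt(math.log(total) / stats[a][0]))
                if stats[a][0] else float("inf"))
        rest = [x for x in avail if x != h]
        random.shuffle(rest)                        # random draft completion
        draft = picks + [h] + rest[:DRAFT_LEN - len(picks) - 1]
        v = lineup_value(draft)
        if len(picks) % 2 == 1:                     # red is picking: flip value
            v = 1.0 - v
        stats[h][0] += 1
        stats[h][1] += v
    return max(avail, key=lambda a: stats[a][0])    # most-visited hero wins
```

The paper's drafting agent replaces the random completions and hand-written value with a neural network, which is what keeps the search tractable over a large hero pool.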
Numerical Performance and Testing Scope

This AI system was evaluated through both professional matches and large-scale testing with the public:

  • Professional Matches: Across 42 matches against esports professionals, the AI achieved a 95.2% win rate (40 of 42), demonstrating superhuman capability.
  • Public Matches: The AI achieved a 97.7% win rate over 642,047 games against top-ranking players, underscoring its effectiveness across diverse high-level strategies.
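The two win rates carry very different statistical certainty: 95.2% over 42 games has a far wider confidence interval than 97.7% over 642,047. A quick Wilson score interval makes this concrete (standard statistics, not a method from the paper; the public-match win count is inferred from the reported rate):

```python
import math

def wilson_interval(wins, n, z=1.96):
    # 95% Wilson score interval for a binomial proportion.
    p = wins / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

pro = wilson_interval(40, 42)                         # 95.2% of 42 pro matches
pub = wilson_interval(round(0.977 * 642047), 642047)  # wins inferred from rate
```

The professional-match interval spans roughly 84% to 99%, while the public-match interval is under a tenth of a percentage point wide, which is why the large-scale public test is the stronger evidence of robustness.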

These tests mark a significant increase in scale compared to earlier AI evaluations on games like StarCraft and Dota, confirming the robustness and adaptability of the approach.

Implications and Future Directions

This research contributes substantial theoretical and practical insights into AI’s feasibility in solving complex strategic problems with real-time requirements. The innovations in DRL scalability are likely to transcend MOBA games, offering utility in robotics and other real-time collaborative systems. The work stimulates further research into efficient methods to scale AI learning processes, particularly in high-dimensional environments, and paves the way for a more complete understanding of multi-agent cooperation and competition dynamics.

The authors express intentions to extend the AI capabilities to all 101 heroes in Honor of Kings, further refining their approach for complete mastery of the game. Additionally, their methodology presents a foundation for developing subtasks that could benefit the broader AI community.

Overall, this paper provides a comprehensive view of the possibilities and challenges that come with mastering MOBA games using artificial intelligence, contributing a meaningful step forward in strategic game AI research.