
StarCraft Micromanagement with Reinforcement Learning and Curriculum Transfer Learning (1804.00810v1)

Published 3 Apr 2018 in cs.AI, cs.LG, and cs.MA

Abstract: Real-time strategy games have been an important field of game artificial intelligence in recent years. This paper presents a reinforcement learning and curriculum transfer learning method to control multiple units in StarCraft micromanagement. We define an efficient state representation, which breaks down the complexity caused by the large state space in the game environment. Then a parameter sharing multi-agent gradient-descent Sarsa(λ) (PS-MAGDS) algorithm is proposed to train the units. The learning policy is shared among our units to encourage cooperative behaviors. We use a neural network as a function approximator to estimate the action-value function, and propose a reward function to help units balance their move and attack. In addition, a transfer learning method is used to extend our model to more difficult scenarios, which accelerates the training process and improves the learning performance. In small scale scenarios, our units successfully learn to combat and defeat the built-in AI with 100% win rates. In large scale scenarios, a curriculum transfer learning method is used to progressively train a group of units, and shows superior performance over some baseline methods in target scenarios. With reinforcement learning and curriculum transfer learning, our units are able to learn appropriate strategies in StarCraft micromanagement scenarios.

Authors (3)
  1. Kun Shao (29 papers)
  2. Yuanheng Zhu (17 papers)
  3. Dongbin Zhao (62 papers)
Citations (161)

Summary

StarCraft Micromanagement with Reinforcement Learning and Curriculum Transfer Learning

This paper explores the application of reinforcement learning (RL) and curriculum transfer learning within the challenging domain of micromanagement in the real-time strategy game StarCraft. The authors focus on developing a sophisticated control strategy for multiple units, leveraging modern machine learning techniques to overcome complexities inherent in the game's environment.

StarCraft is widely recognized for its intricate gameplay, necessitating both strategic planning at a macro level and precise unit control at a micro level. These complexities render it an ideal testbed for AI research, especially in areas such as multi-agent collaboration, spatial reasoning, and adversarial planning.

Key Contributions

  1. Efficient State Representation: The authors introduce a concise and efficient state representation method that effectively handles large state spaces inherent in StarCraft. This representation considers units' attributes and distances, enabling flexible strategies with an arbitrary number of units.
  2. PS-MAGDS Algorithm: The paper proposes the Parameter Sharing Multi-Agent Gradient-Descent Sarsa(λ) algorithm, a reinforcement learning technique employing a neural network as a function approximator. Sharing the learning policy parameters among units promotes cooperative behaviors during combat.
  3. Reward Function: A novel reward function is devised to incentivize units to balance movement and attack strategies, addressing issues related to sparse and delayed rewards typical of the game environment.
  4. Curriculum Transfer Learning: To enhance training efficiency and accelerate learning performance, a curriculum-based transfer learning approach is implemented. This method successfully extends the RL model to more challenging scenarios, with units quickly adapting to complex combat situations.
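The core update behind the second contribution can be sketched in a few lines. The following is a minimal illustration of gradient-descent Sarsa(λ) with parameter sharing, using a linear function approximator instead of the paper's neural network; the feature sizes, hyperparameters, and class name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class SharedSarsaLambda:
    """Sketch of parameter-sharing gradient-descent Sarsa(lambda).

    All units update one shared weight matrix (the 'parameter sharing');
    each unit keeps its own eligibility trace per episode.
    """

    def __init__(self, n_features, n_actions, alpha=0.1, gamma=0.9, lam=0.8):
        self.alpha, self.gamma, self.lam = alpha, gamma, lam
        # One weight matrix shared by every unit; rows index actions.
        self.w = np.zeros((n_actions, n_features))

    def q(self, phi, a):
        # Linear action-value estimate Q(s, a) = w_a . phi(s).
        return self.w[a] @ phi

    def new_trace(self):
        # Fresh eligibility trace, one per unit per episode.
        return np.zeros_like(self.w)

    def update(self, trace, phi, a, r, phi_next, a_next, done):
        # TD error: delta = r + gamma * Q(s', a') - Q(s, a).
        target = r if done else r + self.gamma * self.q(phi_next, a_next)
        delta = target - self.q(phi, a)
        # Decay all traces, accumulate on the taken action's features.
        trace *= self.gamma * self.lam
        trace[a] += phi
        # Gradient-descent step on the shared weights.
        self.w += self.alpha * delta * trace
        return trace
```

Because every unit calls `update` against the same `self.w`, experience gathered by one unit immediately shapes the value estimates of all others, which is what encourages the cooperative behaviors described above.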

Experimental Results

The effectiveness of these methods is demonstrated through experiments in both small-scale and large-scale StarCraft scenarios. In small-scale combat scenarios, units trained with the proposed techniques achieved a 100% win rate against the built-in StarCraft AI. In large-scale scenarios, curriculum transfer learning outperformed baseline methods such as rule-based and zero-order optimization approaches, reaching higher win rates in the target scenarios.
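The curriculum procedure in these large-scale experiments amounts to training on scenarios ordered from easy to hard while carrying the learned weights forward. A minimal sketch, where `scenarios`, `train`, and `evaluate` are placeholder callables standing in for the environment and training loop (not the paper's API):

```python
def curriculum_train(scenarios, make_agent, train, evaluate, threshold=0.9):
    """Train one agent across scenarios ordered from easy to hard,
    transferring its parameters to each harder stage."""
    agent = make_agent()
    for scenario in scenarios:        # easy -> hard ordering
        win_rate = 0.0
        while win_rate < threshold:   # train until proficient here
            train(agent, scenario)
            win_rate = evaluate(agent, scenario)
        # No reset between stages: the learned parameters serve as
        # the initialization for the next, harder scenario.
    return agent
```

The key design choice is that the agent is never reinitialized between stages, so each scenario starts from a policy that is already competent on a simpler version of the task.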

Implications and Future Research Directions

The implications of this research extend beyond StarCraft into broader applications of multi-agent systems and game AI. By addressing challenging aspects of multi-agent collaboration and strategic decision-making, the proposed methods have potential utility in areas such as robotics, autonomous systems, and complex simulations.

Future research may delve into refining multi-agent coordination through decentralized learning approaches, or exploring hierarchical reinforcement learning to tackle delayed reward mechanisms more robustly. Additionally, expanding the model to accommodate diverse unit types and scenarios could lead to further advancements in game AI sophistication.

Conclusion

This paper presents a structured approach for mastering micromanagement in StarCraft using reinforcement learning and curriculum transfer learning. The proposed methodologies exhibit strong potential for AI-driven unit control, evidenced by significant improvements in movement and combat performance. Through careful design and experimentation, the authors contribute valuable insights to the study of game AI, paving the way for future innovations in strategic multi-agent environments.