Human-level performance in first-person multiplayer games with population-based deep reinforcement learning (1807.01281v1)

Published 3 Jul 2018 in cs.LG, cs.AI, and stat.ML

Abstract: Recent progress in artificial intelligence through reinforcement learning (RL) has shown great success on increasingly complex single-agent environments and two-player turn-based games. However, the real world contains multiple agents, each learning and acting independently to cooperate and compete with other agents, and environments reflecting this degree of complexity remain an open challenge. In this work, we demonstrate for the first time that an agent can achieve human-level performance in a popular 3D multiplayer first-person video game, Quake III Arena Capture the Flag, using only pixels and game points as input. These results were achieved by a novel two-tier optimisation process in which a population of independent RL agents are trained concurrently from thousands of parallel matches with agents playing in teams together and against each other on randomly generated environments. Each agent in the population learns its own internal reward signal to complement the sparse delayed reward from winning, and selects actions using a novel temporally hierarchical representation that enables the agent to reason at multiple timescales. During game-play, these agents display human-like behaviours such as navigating, following, and defending based on a rich learned representation that is shown to encode high-level game knowledge. In an extensive tournament-style evaluation the trained agents exceeded the win-rate of strong human players both as teammates and opponents, and proved far stronger than existing state-of-the-art agents. These results demonstrate a significant jump in the capabilities of artificial agents, bringing us closer to the goal of human-level intelligence.

Human-level Performance in Multiplayer Games Through Population-based Deep RL

The paper "Human-level performance in first-person multiplayer games with population-based deep reinforcement learning" addresses the challenge of creating artificial agents capable of competing at human levels in complex, multi-agent environments. The research focuses on a variant of the game Quake III Arena, specifically the Capture the Flag (CTF) mode, where multiple agents must learn concurrently to cooperate and compete in randomly generated environments.

Methodology

The authors employ a novel two-tiered optimization process using population-based deep reinforcement learning (RL) to train a collection of agents. This setup involves:

  1. Concurrent Training: Agents are trained simultaneously across thousands of parallel matches, exposing the population to a diverse range of teammates, opponents, and procedurally generated maps.
  2. Internal Reward Signals: Each agent learns its own dense internal reward signal that supplements the sparse, delayed reward from winning, making skill acquisition more reliable (a population-update sketch follows this list).
  3. Temporally Hierarchical Representation: A recurrent model that operates at multiple timescales lets agents align short-term actions with long-term strategy (a two-timescale sketch appears further below).
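
The interplay between the inner RL updates and the outer population-level optimisation can be pictured with a minimal Population Based Training (PBT) step. The snippet below is an illustrative sketch only: the agent dictionaries, the 20% exploit/explore split, and the perturbation factor are assumptions made for this example, not details taken from the paper.

```python
import copy
import random

def pbt_step(population, ratings, perturb=1.2):
    """One exploit/explore step over a population of agents.

    population: list of agent dicts with 'weights', 'reward_weights', 'lr'.
    ratings: list of current Elo-style ratings, one per agent.
    """
    ranked = sorted(range(len(population)), key=lambda i: ratings[i])
    n = len(ranked)
    bottom, top = ranked[: n // 5], ranked[-(n // 5):]

    for loser in bottom:
        winner = random.choice(top)
        # Exploit: copy the stronger agent's parameters and internal rewards.
        population[loser] = copy.deepcopy(population[winner])
        agent = population[loser]
        # Explore: perturb the inherited internal-reward weights and learning
        # rate so the population stays diverse rather than collapsing.
        agent["reward_weights"] = {
            event: w * random.choice([1.0 / perturb, perturb])
            for event, w in agent["reward_weights"].items()
        }
        agent["lr"] *= random.choice([1.0 / perturb, perturb])
    return population
```

Between these outer steps, each agent continues its own inner RL training from ongoing matches; the outer loop only decides which internal reward weightings and hyperparameters survive and propagate.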

These agents, drawing input solely from raw pixels and game points, eventually demonstrate strategic behaviors typically associated with human players, such as navigation and defensive tactics.
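
To make the temporally hierarchical idea concrete, here is a minimal two-timescale recurrent core in PyTorch. This is a simplified sketch with assumed dimensions and a fixed slow-tick period; the paper's agent couples its fast and slow recurrent cores through a learned prior/posterior (variational) mechanism rather than the simple concatenation used here.

```python
import torch
import torch.nn as nn

class TwoTimescaleCore(nn.Module):
    """Fast core ticks every step; slow core ticks every `period` steps."""

    def __init__(self, obs_dim, hidden_dim, period=10):
        super().__init__()
        self.period = period
        self.slow = nn.LSTMCell(obs_dim, hidden_dim)
        self.fast = nn.LSTMCell(obs_dim + hidden_dim, hidden_dim)

    def forward(self, obs_seq):
        # obs_seq: (T, obs_dim) sequence of encoded pixel observations.
        T, _ = obs_seq.shape
        h_s = c_s = obs_seq.new_zeros(1, self.slow.hidden_size)
        h_f = c_f = obs_seq.new_zeros(1, self.fast.hidden_size)
        outputs = []
        for t in range(T):
            x = obs_seq[t : t + 1]
            if t % self.period == 0:
                # Slow core updates infrequently, capturing long-horizon context.
                h_s, c_s = self.slow(x, (h_s, c_s))
            # Fast core updates every step, conditioned on the slow state.
            h_f, c_f = self.fast(torch.cat([x, h_s], dim=-1), (h_f, c_f))
            outputs.append(h_f)
        # Per-step fast-core states, e.g. for policy and value heads.
        return torch.stack(outputs)
```

The design intent is that the slow core maintains strategic context (e.g. which flag is where) while the fast core handles moment-to-moment control.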

Results

The paper reports that trained agents surpassed the win rates of strong human players, generalizing zero-shot to procedurally generated maps they had never seen. In tournament-style evaluations under diverse game conditions, the agents attained higher Elo ratings than their human counterparts, indicating stronger strategic play. Moreover, the agents remained effective when paired with previously unseen teammates, including humans, adapting to unfamiliar strategies and play styles.
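
For reference, an Elo-style comparison works along these lines. The function below is the standard Elo update, shown only as an illustration of how game outcomes translate into ratings; the paper's exact rating procedure may differ.

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Update rating_a after one game against rating_b.

    score_a: 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    return rating_a + k * (score_a - expected_a)

# Example: a 1500-rated agent beating a 1600-rated human gains about 20 points.
new_rating = elo_update(1500, 1600, 1.0)
```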

Implications

This research advances the understanding of RL in high-dimensional, multi-agent environments. By employing population-based training and internal reward structures, the authors provide a framework that addresses critical issues in multi-agent RL: stability, scalability, and generalization. This approach has implications for various domains where autonomous systems must operate collaboratively or competitively without predefined models or human guidance.

Future Developments

The findings prompt several areas for future exploration:

  1. Population Diversity: Developing techniques to maintain and enrich diversity within agent populations could enhance learning adaptability and robustness.
  2. Meta-Optimization: Refining meta-optimization strategies such as Population Based Training (PBT) for more efficient exploration-exploitation trade-offs.
  3. Temporal Credit Assignment: Improving methods for more precise temporal credit assignment could further improve learning speed and efficiency.

Overall, this work narrows the gap between artificial agents and human-level intelligence by combining a sophisticated architecture with a population-based training approach in complex, multi-agent settings. The techniques and insights could be applied to other competitive, cooperative, and dynamic domains beyond gaming.

Authors (18)
  1. Max Jaderberg (26 papers)
  2. Wojciech M. Czarnecki (15 papers)
  3. Iain Dunning (10 papers)
  4. Luke Marris (23 papers)
  5. Guy Lever (18 papers)
  6. Antonio Garcia Castaneda (4 papers)
  7. Charles Beattie (8 papers)
  8. Neil C. Rabinowitz (11 papers)
  9. Ari S. Morcos (31 papers)
  10. Avraham Ruderman (6 papers)
  11. Nicolas Sonnerat (10 papers)
  12. Tim Green (7 papers)
  13. Louise Deason (1 paper)
  14. Joel Z. Leibo (70 papers)
  15. David Silver (67 papers)
  16. Demis Hassabis (41 papers)
  17. Koray Kavukcuoglu (57 papers)
  18. Thore Graepel (48 papers)
Citations (696)