
PlotMap: Automated Layout Design for Building Game Worlds (2309.15242v4)

Published 26 Sep 2023 in cs.AI

Abstract: World-building, the process of developing both the narrative and physical world of a game, plays a vital role in the game's experience. Critically-acclaimed independent and AAA video games are praised for strong world-building, with game maps that masterfully intertwine with and elevate the narrative, captivating players and leaving a lasting impression. However, designing game maps that support a desired narrative is challenging, as it requires satisfying complex constraints from various considerations. Most existing map generation methods focus on considerations about gameplay mechanics or map topography, while the need to support the story is typically neglected. As a result, extensive manual adjustment is still required to design a game world that facilitates particular stories. In this work, we approach this problem by introducing an extra layer of plot facility layout design that is independent of the underlying map generation method in a world-building pipeline. Concretely, we define (plot) facility layout tasks as the tasks of assigning concrete locations on a game map to abstract locations mentioned in a given story (plot facilities), following spatial constraints derived from the story. We present two methods for solving these tasks automatically: an evolutionary computation based approach through Covariance Matrix Adaptation Evolution Strategy (CMA-ES), and a Reinforcement Learning (RL) based approach. We develop a method of generating datasets of facility layout tasks, create a gym-like environment for experimenting with and evaluating different methods, and further analyze the two methods with comprehensive experiments, aiming to provide insights for solving facility layout tasks. We will release the code and a dataset containing 10,000 tasks of different scales.
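A facility layout task, as the abstract defines it, pairs a terrain map with spatial constraints over abstract plot facilities. The sketch below illustrates the evolutionary route with a simplified elitist (μ, λ) evolution strategy standing in for full CMA-ES; the facilities, the "near"/"far" constraint forms, and all names are invented for illustration, not taken from the paper.

```python
import random

# Hypothetical task: place three plot facilities on a unit-square map so
# that "village near lake" (distance < 0.2) and "castle far from village"
# (distance > 0.6) both hold.
FACILITIES = ["village", "lake", "castle"]

def violation(layout):
    """Sum of constraint violations; 0.0 means every constraint holds."""
    def dist(a, b):
        (ax, ay), (bx, by) = layout[a], layout[b]
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    v = 0.0
    v += max(0.0, dist("village", "lake") - 0.2)    # "near" constraint
    v += max(0.0, 0.6 - dist("village", "castle"))  # "far" constraint
    return v

def evolve(generations=200, pop=20, sigma=0.1, seed=0):
    """Elitist evolution strategy: mutate the parent layout with Gaussian
    noise, keep the best offspring only if it is at least as good."""
    rng = random.Random(seed)
    parent = {f: (rng.random(), rng.random()) for f in FACILITIES}
    for _ in range(generations):
        offspring = []
        for _ in range(pop):
            child = {f: (min(1.0, max(0.0, x + rng.gauss(0, sigma))),
                         min(1.0, max(0.0, y + rng.gauss(0, sigma))))
                     for f, (x, y) in parent.items()}
            offspring.append((violation(child), child))
        offspring.sort(key=lambda t: t[0])
        best_v, best = offspring[0]
        if best_v <= violation(parent):
            parent = best
        if violation(parent) == 0.0:
            break
    return parent

layout = evolve()
print(violation(layout))  # should approach 0.0 on this easy task
```

CMA-ES additionally adapts the full covariance of the mutation distribution, which matters on tasks with many correlated facilities; this fixed-σ sketch only conveys the black-box-optimization framing.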


Summary

  • The paper introduces plot facility layout design, a layer that integrates narrative spatial constraints into automated game-world map generation and is solved with both evolutionary computation (CMA-ES) and reinforcement learning.
  • It employs a dataset of 10,000 layout tasks alongside a Gym-like RL environment to train and assess the effectiveness of the design approach.
  • The RL-based method demonstrates scalability and adaptability in meeting complex spatial and narrative requirements while reducing manual design effort.

PlotMap: Automated Layout Design for Building Game Worlds

This essay provides an analytical review of "PlotMap: Automated Layout Design for Building Game Worlds", a paper that proposes automated methods for laying out game worlds: a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) approach and a Reinforcement Learning (RL) approach. The paper introduces the concept of plot facility layout design to integrate narrative spatial constraints into the map design of video-game worlds, a task that previously required significant human effort and expertise.

Summary of Contributions

The paper's contributions are threefold:

  1. Introduction of Plot Facility Layout Design: The authors formulate plot facility layout design as a layer that is independent of the underlying map generation method. This decoupling allows cohesive, narrative-driven game environments to be built on top of any existing map generator.
  2. Dataset and Evaluation Environment: The authors have prepared a dataset consisting of 10,000 layout tasks and a Gym-like RL environment to train and evaluate their models. This dataset is designed based on varying spatial constraints and terrain maps generated procedurally, offering significant diversity for RL model training.
  3. An RL-Based Baseline: Baseline results using their RL-based method indicate its effectiveness on the plot facility layout task, producing map layouts that satisfy narrative constraints and facilitating human-AI co-design workflows.
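The Gym-like environment of contribution 2 can be pictured as follows. The class, the discrete-grid placement scheme, and the constraint encoding are all invented for illustration; only the reset/step/reward loop mirrors the standard Gym interface, not the paper's actual implementation.

```python
import random

class FacilityLayoutEnv:
    """Toy Gym-style environment: the agent places one plot facility per
    step on a discrete grid; the reward is the fraction of spatial
    constraints satisfied once every facility has been placed."""

    def __init__(self, facilities, constraints, grid=16, seed=0):
        self.facilities = facilities    # abstract story locations, in order
        self.constraints = constraints  # list of (kind, a, b, threshold)
        self.grid = grid
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.placed = {}                # facility -> (row, col)
        self.idx = 0
        return self._obs()

    def _obs(self):
        nxt = self.facilities[self.idx] if self.idx < len(self.facilities) else None
        return {"next": nxt, "placed": dict(self.placed)}

    def step(self, action):
        """action: (row, col) grid cell for the current facility."""
        self.placed[self.facilities[self.idx]] = action
        self.idx += 1
        done = self.idx == len(self.facilities)
        reward = self._score() if done else 0.0
        return self._obs(), reward, done, {}

    def _score(self):
        def dist(a, b):
            (r1, c1), (r2, c2) = self.placed[a], self.placed[b]
            return ((r1 - r2) ** 2 + (c1 - c2) ** 2) ** 0.5
        ok = 0
        for kind, a, b, t in self.constraints:
            d = dist(a, b)
            ok += (d <= t) if kind == "near" else (d >= t)
        return ok / len(self.constraints)

env = FacilityLayoutEnv(["village", "lake", "castle"],
                        [("near", "village", "lake", 3.0),
                         ("far", "village", "castle", 8.0)])
obs = env.reset()
obs, r, done, _ = env.step((2, 2))    # village
obs, r, done, _ = env.step((2, 4))    # lake: distance 2, "near" holds
obs, r, done, _ = env.step((12, 12))  # castle: distance ~14, "far" holds
print(r)  # 1.0: both constraints satisfied
```

Terminal-only reward like this makes credit assignment hard; the paper's environment presumably shapes or structures observations more richly, but the episode skeleton is the same.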

Results and Discussion

The system integrates a pre-trained LLM to extract story constraints expressed in natural language and translate them into geometric requirements on the map. Using RL, the system positions plot facilities on the map so that constraints derived from the narrative logic are met. The RL agent assigns locations to the plot facilities by reasoning over spatial relationships such as proximity and visibility.
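A relation like "visibility" can be grounded as a line-of-sight test over a terrain heightmap. The sketch below is a minimal version with nearest-cell height sampling; the function name, the sampling scheme, and the example terrain are assumptions for illustration, not the paper's formulation.

```python
def visible(heightmap, a, b, eye_height=1.0, samples=64):
    """Line-of-sight between cells a and b on a 2D heightmap: True if no
    intermediate terrain rises above the straight sight line connecting
    the two eye points."""
    (ar, ac), (br, bc) = a, b
    ha = heightmap[ar][ac] + eye_height
    hb = heightmap[br][bc] + eye_height
    for i in range(1, samples):
        t = i / samples
        r = round(ar + t * (br - ar))       # nearest cell on the segment
        c = round(ac + t * (bc - ac))
        sight = ha + t * (hb - ha)          # height of the sight line at t
        if heightmap[r][c] > sight:         # terrain blocks the view
            return False
    return True

# Flat valley versus a ridge running between two facilities:
flat  = [[0.0] * 8 for _ in range(8)]
ridge = [row[:] for row in flat]
for c in range(8):
    ridge[4][c] = 5.0                       # wall of height 5 across row 4

print(visible(flat,  (1, 1), (7, 7)))   # True: nothing in the way
print(visible(ridge, (1, 1), (7, 7)))   # False: the ridge blocks sight
```

Predicates like this (together with distance thresholds for proximity) turn a story's qualitative spatial language into checkable geometry that a reward function can score.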

The paper illustrates the flexibility of this approach with examples in which the same narrative is adapted to different terrains: the plot facilities maintain the relative positions required by the constraints while the geographical layouts vary. This underscores the RL strategy's potential to generalize across environments and scenarios, allowing the same story to manifest in unique ways on different maps.

Implications and Future Work

The paper suggests significant practical implications for the games industry, specifically for narrative-driven game design. By automating the layout process, designers can cut time-consuming manual work while still adhering to the narrative demands of the storyline, making it faster to produce large, immersive game worlds at scale.

From a theoretical perspective, the proposal opens new avenues in RL research, particularly in handling complex constraint satisfaction problems expressed in natural language. Future research could refine these techniques to better understand designer preferences or accommodate even more complex story-driven constraints.

Future developments might involve exploring distributed RL approaches to improve scalability, as current systems rely on a single RL agent to sequentially handle all plot facilities. Improvements in embedding strategies could also enhance the sample efficiency and generalization capabilities of RL agents by encoding constraints more effectively.

Conclusion

This paper presents a novel approach to integrating narrative constraints into procedural world-building by using RL. It fills an important gap in automated game design, providing a scalable and efficient method to ensure that game maps support their intended stories. The comprehensive dataset and Gym-like environment they offer set a foundation for further exploration and improvement, promising enriched collaborations between human designers and AI in creative endeavors. While there are limitations, the work stands as a crucial step towards more narrative-driven procedurally generated game worlds.
