PlotMap: Automated Layout Design for Building Game Worlds (2309.15242v4)
Abstract: World-building, the process of developing both the narrative and the physical world of a game, plays a vital role in the game's experience. Critically acclaimed independent and AAA video games are praised for strong world-building, with game maps that masterfully intertwine with and elevate the narrative, captivating players and leaving a lasting impression. However, designing game maps that support a desired narrative is challenging, as it requires satisfying complex constraints from various considerations. Most existing map generation methods focus on gameplay mechanics or map topography, while the need to support the story is typically neglected. As a result, extensive manual adjustment is still required to design a game world that facilitates particular stories. In this work, we approach this problem by introducing an extra layer of plot facility layout design, independent of the underlying map generation method, into a world-building pipeline. Concretely, we define (plot) facility layout tasks as tasks of assigning concrete locations on a game map to abstract locations mentioned in a given story (plot facilities), following spatial constraints derived from the story. We present two methods for solving these tasks automatically: an evolutionary-computation-based approach using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), and a Reinforcement Learning (RL) based approach. We develop a method of generating datasets of facility layout tasks, create a gym-like environment for experimenting with and evaluating different methods, and further analyze the two methods with comprehensive experiments, aiming to provide insights for solving facility layout tasks. We will release the code and a dataset containing 10,000 tasks of different scales.
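To make the task definition concrete, the following is a minimal, hypothetical sketch of a plot facility layout task: abstract story locations are assigned 2D map coordinates subject to story-derived spatial constraints, and a simple (1+1) evolution strategy stands in for the paper's CMA-ES optimizer. The facility names, constraint forms, and penalty function below are invented for illustration and are not taken from the paper.

```python
import math
import random

# Hypothetical plot facilities (abstract story locations) -- invented names.
FACILITIES = ["village", "forest", "castle"]

# Each constraint is (facility_a, facility_b, relation, threshold) on
# Euclidean distance between the two facilities' assigned map positions.
CONSTRAINTS = [
    ("village", "forest", "near", 10.0),   # the village lies near the forest
    ("castle", "village", "far", 30.0),    # the castle lies far from the village
]

def violation(layout):
    """Total constraint violation of a layout; 0.0 means fully satisfied."""
    total = 0.0
    for a, b, rel, t in CONSTRAINTS:
        d = math.dist(layout[a], layout[b])
        if rel == "near":
            total += max(0.0, d - t)   # penalize distance above the threshold
        else:  # "far"
            total += max(0.0, t - d)   # penalize distance below the threshold
    return total

def solve(map_size=100.0, iters=2000, sigma=5.0, seed=0):
    """A toy (1+1) evolution strategy (a stand-in for CMA-ES):
    mutate all facility positions with Gaussian noise and keep
    the candidate whenever it does not increase the violation."""
    rng = random.Random(seed)
    layout = {f: (rng.uniform(0, map_size), rng.uniform(0, map_size))
              for f in FACILITIES}
    best = violation(layout)
    for _ in range(iters):
        cand = {f: (x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
                for f, (x, y) in layout.items()}
        v = violation(cand)
        if v <= best:
            layout, best = cand, v
    return layout, best
```

In this framing, any layout with zero violation is a valid answer to the task; a real system would add many more constraint types (reachability, terrain compatibility, ordering along the plot) and a stronger optimizer such as CMA-ES or a learned RL policy.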