PCGPT: Procedural Content Generation via Transformers (2310.02405v1)
Abstract: The paper presents PCGPT, a framework for procedural content generation (PCG) that combines offline reinforcement learning with transformer networks. PCGPT uses a transformer-based autoregressive model to generate game levels iteratively, addressing shortcomings of traditional PCG methods such as repetitive, predictable, or inconsistent content. The framework models trajectories of actions, states, and rewards, leveraging the transformer's self-attention mechanism to capture temporal dependencies and causal relationships. The approach is evaluated on the Sokoban puzzle game, where the model predicts which items a level needs and where to place them. Experimental results show that PCGPT generates more complex and diverse content, and does so in significantly fewer steps than existing methods, demonstrating its potential for game design and online content generation. The model thus represents a new PCG paradigm that outperforms previous methods.
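The iterative generation loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `stub_policy`, `generate_level`, and the `TILES` vocabulary are assumptions invented for the sketch, and a random stub stands in for the trained transformer, which in PCGPT would condition on the full (reward, state, action) trajectory.

```python
import random

# Assumed tile vocabulary for a Sokoban-style level (illustrative only).
TILES = ["empty", "wall", "player", "box", "target"]

def stub_policy(trajectory, grid):
    """Placeholder for the autoregressive transformer.

    A real PCGPT-style model would attend over the trajectory of past
    (reward, state, action) tokens; here we sample uniformly at random.
    """
    y = random.randrange(len(grid))
    x = random.randrange(len(grid[0]))
    tile = random.choice(TILES)
    return tile, (y, x)

def generate_level(height=5, width=5, max_steps=20, seed=0):
    """Iteratively edit a grid: at each step, predict an item and a location."""
    random.seed(seed)
    grid = [["empty"] * width for _ in range(height)]
    trajectory = []
    for step in range(max_steps):
        tile, (y, x) = stub_policy(trajectory, grid)
        grid[y][x] = tile                      # apply the predicted edit
        trajectory.append((step, tile, (y, x)))
        # Stop early once the grid contains the pieces a Sokoban stage needs;
        # the trained model instead stops when its predicted return is met.
        flat = [t for row in grid for t in row]
        if all(flat.count(t) >= 1 for t in ("player", "box", "target")):
            break
    return grid, trajectory
```

The point of the sketch is the control flow: generation is a sequence of localized edits conditioned on the edit history, which is what lets the self-attention mechanism capture dependencies between earlier and later placements.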