
Practical PCG Through Large Language Models (2305.18243v3)

Published 20 May 2023 in cs.CL and cs.AI

Abstract: LLMs have proven to be useful tools in domains well beyond natural language processing, the field in which they originated. In this study, we provide practical directions on how to use LLMs to generate 2D game rooms for Metavoidal, a game under development. Our technique harnesses GPT-3 through human-in-the-loop fine-tuning, which allows our method to create 37% playable-novel levels from data as scarce as only 60 hand-designed rooms, for a game that is non-trivial with respect to procedural content generation (PCG) and carries a substantial number of local and global constraints.

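The abstract only sketches the pipeline at a high level: hand-designed rooms are serialized as text, GPT-3 is fine-tuned on them with a human in the loop, and generated rooms are then scored for playability and novelty. The Python below is a purely illustrative sketch, not the authors' code: it encodes rooms as tile-character grids and computes a playable-novel rate over a batch of generated rooms. The tile symbols, the door-connectivity rule used as the playability constraint, and the exact-match novelty test are all assumptions made for the example.

```python
# Hypothetical sketch (not the authors' method): serialize tile-grid rooms and
# score a batch of generated rooms for the "playable and novel" fraction that
# the abstract reports. Tile symbols and both checks are assumptions.
from collections import deque

WALL, FLOOR, DOOR = "#", ".", "D"

def room_to_text(room: list[str]) -> str:
    """Serialize a room grid into the newline-joined string an LLM would see."""
    return "\n".join(room)

def is_playable(room: list[str]) -> bool:
    """Assumed constraint: every door is reachable from every other door
    through walkable tiles (one connected walkable region)."""
    doors = [(r, c) for r, row in enumerate(room) for c, t in enumerate(row) if t == DOOR]
    if len(doors) < 2:
        return False
    start = doors[0]
    seen = {start}
    queue = deque([start])
    while queue:  # breadth-first search over walkable tiles
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(room) and 0 <= nc < len(room[nr])
                    and room[nr][nc] in (FLOOR, DOOR) and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return all(d in seen for d in doors)

def is_novel(room: list[str], training_rooms: list[list[str]]) -> bool:
    """Assumed novelty test: the room is not an exact copy of a training room."""
    return room_to_text(room) not in {room_to_text(t) for t in training_rooms}

def playable_novel_rate(generated: list[list[str]], training: list[list[str]]) -> float:
    ok = sum(1 for g in generated if is_playable(g) and is_novel(g, training))
    return ok / len(generated) if generated else 0.0

if __name__ == "__main__":
    hand_designed = [["#D##", "#..#", "#..#", "##D#"]]     # stand-in for the 60 rooms
    generated = [
        ["#D##", "#..#", "#..#", "##D#"],                  # playable but not novel
        ["#D##", "#.##", "#..#", "##D#"],                  # playable and novel
        ["#D##", "####", "#..#", "##D#"],                  # doors disconnected
    ]
    print(f"Playable-Novel rate: {playable_novel_rate(generated, hand_designed):.0%}")
```

In the paper's setting, the analogous check would be applied to rooms sampled from the fine-tuned GPT-3 model rather than to the toy lists used here.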
Authors (2)
  1. Julian Togelius (154 papers)
  2. Muhammad U. Nasir (1 paper)
Citations (15)