Practical PCG Through Large Language Models (2305.18243v3)
Published 20 May 2023 in cs.CL and cs.AI
Abstract: LLMs have proven to be useful tools in domains well beyond natural language processing, the field of their inception. In this study, we provide practical directions for using LLMs to generate 2D game rooms for an under-development game named Metavoidal. Our technique harnesses the power of GPT-3 through human-in-the-loop fine-tuning, which allows our method to produce 37% playable-novel levels from data as scarce as only 60 hand-designed rooms, in a game that is non-trivial with respect to procedural content generation (PCG) owing to its many local and global constraints.
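The approach sketched in the abstract, fine-tuning GPT-3 on a small set of hand-designed rooms and keeping a human in the loop to vet generated candidates, can be illustrated with a short Python sketch. This is not the authors' implementation: the ASCII tile encoding, the `generate_room` stub, and the loop parameters are all assumptions for illustration; only the legacy prompt/completion JSONL format matches what GPT-3 fine-tuning actually expected.

```python
import json
import random

# Hypothetical ASCII encoding of a 2D room: '#' wall, '.' floor, 'D' door.
# The real tile vocabulary used for Metavoidal is an assumption here.
HAND_DESIGNED_ROOMS = [
    "#########\n#.......#\nD.......D\n#.......#\n#########",
    # ... roughly 60 hand-designed rooms in the paper's setting
]

def to_finetune_record(room: str) -> dict:
    """Format one room as a legacy GPT-3 prompt/completion pair (one JSONL row)."""
    return {"prompt": "Generate a room:\n\n", "completion": " " + room + " END"}

def write_training_file(rooms, path="rooms.jsonl"):
    """Write the current dataset in the JSONL format GPT-3 fine-tuning consumed."""
    with open(path, "w") as f:
        for room in rooms:
            f.write(json.dumps(to_finetune_record(room)) + "\n")

def generate_room(model) -> str:
    """Placeholder for sampling a room from the fine-tuned model.
    A real implementation would call the GPT-3 completions endpoint."""
    return random.choice(HAND_DESIGNED_ROOMS)  # stub, for a runnable sketch

def human_in_the_loop(rooms, rounds=3, samples_per_round=20):
    """Iteratively fine-tune, sample, and keep only human-approved rooms."""
    dataset = list(rooms)
    for _ in range(rounds):
        write_training_file(dataset)   # training data for the next fine-tune
        model = None                   # fine-tune GPT-3 on rooms.jsonl here
        for _ in range(samples_per_round):
            candidate = generate_room(model)
            verdict = input(f"Keep this room?\n{candidate}\n[y/n] ")
            if verdict.strip().lower() == "y":
                dataset.append(candidate)  # approved rooms enrich the data
    return dataset
```

In the paper's setting, playability and novelty checks are what yield the 37% figure; here a single human verdict stands in for both, and each accepted room feeds the next fine-tuning round.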
- Julian Togelius
- Muhammad U Nasir