
G-PCGRL: Procedural Graph Data Generation via Reinforcement Learning (2407.10483v1)

Published 15 Jul 2024 in cs.LG

Abstract: Graph data structures offer a versatile and powerful means to model relationships and interconnections in various domains, promising substantial advantages in data representation, analysis, and visualization. In games, graph-based data structures are omnipresent and represent, for example, game economies, skill trees, or complex, branching quest lines. With this paper, we propose G-PCGRL, a novel and controllable method for the procedural generation of graph data using reinforcement learning. To this end, we frame the problem as manipulating a graph's adjacency matrix to fulfill a given set of constraints. Our method adapts and extends the Procedural Content Generation via Reinforcement Learning (PCGRL) framework and introduces new representations to frame graph data generation as a Markov decision process. We compare the performance of our method with the original PCGRL, compare its run time with that of a random search and an evolutionary algorithm, and evaluate G-PCGRL on two graph data domains in games: game economies and skill trees. The results show that our method generates graph-based content quickly and reliably to support and inspire designers in the game creation process. In addition, trained models are controllable in terms of the type and number of nodes to be generated.
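The core framing described in the abstract can be sketched as a minimal MDP-style environment: the state is the adjacency matrix, an action toggles one directed edge, and the reward is the change in the number of satisfied constraints. This is an illustrative assumption of what such an environment could look like; the class name, action encoding, and example constraints below are hypothetical and not taken from the paper.

```python
import numpy as np

class GraphEnv:
    """Hypothetical sketch: graph generation as adjacency-matrix editing.

    Not the authors' implementation; a minimal illustration of the
    MDP framing (state = adjacency matrix, action = edge toggle,
    reward = progress toward a set of constraints).
    """

    def __init__(self, n_nodes, constraints):
        self.n = n_nodes
        self.constraints = constraints  # predicates over the matrix
        self.adj = np.zeros((n_nodes, n_nodes), dtype=int)

    def satisfied(self):
        # Number of constraints the current graph fulfills.
        return sum(c(self.adj) for c in self.constraints)

    def step(self, action):
        # Decode the action as one cell (i, j) and toggle that edge.
        i, j = divmod(action, self.n)
        before = self.satisfied()
        if i != j:  # disallow self-loops
            self.adj[i, j] ^= 1
        after = self.satisfied()
        reward = after - before               # reward shaping on progress
        done = after == len(self.constraints)  # all constraints fulfilled
        return self.adj.copy(), reward, done

# Example constraints (illustrative, e.g. for a game economy graph):
# every node has an outgoing edge, and node 0 (a resource "source")
# receives no incoming edges.
constraints = [
    lambda A: bool((A.sum(axis=1) > 0).all()),
    lambda A: bool(A[:, 0].sum() == 0),
]

env = GraphEnv(4, constraints)
```

In this sketch an RL policy would observe the matrix and pick edge toggles until `done`; the paper's random-search and evolutionary baselines would instead search the matrix space directly.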


