RaidEnv: Exploring New Challenges in Automated Content Balancing for Boss Raid Games (2307.01676v1)

Published 4 Jul 2023 in cs.AI

Abstract: The balance of game content significantly impacts the gaming experience. Unbalanced game content diminishes engagement or increases frustration because of repetitive failure. Although game designers aim to adjust the difficulty of game content, doing so is a repetitive, labor-intensive, and challenging process, especially for commercial-level games with extensive content. To address this issue, the game research community has explored automated game balancing using AI techniques. However, previous studies have focused on limited game content and did not consider the generalization ability of playtesting agents when encountering content changes. In this study, we propose RaidEnv, a new game simulator that includes diverse and customizable content for the boss raid scenario in MMORPGs. Additionally, we design two benchmarks for the boss raid scenario that can aid the practical application of game AI. These benchmarks address two open problems in automatic content balancing, and we introduce two evaluation metrics to guide AI approaches to this task. This novel game research platform expands the frontiers of automatic game balancing and offers a framework that fits within a realistic game production pipeline.
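The abstract does not describe RaidEnv's programmatic interface, so the following is only a minimal sketch of the general shape of an automatic content-balancing loop of the kind the paper studies: propose a boss configuration, playtest it with an agent, compare the observed win rate to a designer's target, and adjust. All names here (`BossConfig`, `simulate_raid`, `balance`), the stats chosen, and the update rule are hypothetical illustrations under that assumption, not RaidEnv's actual API or the authors' method.

```python
import random
from dataclasses import dataclass


@dataclass
class BossConfig:
    """Hypothetical boss parameters a balancing loop might tune."""
    health: float
    attack: float


def simulate_raid(cfg: BossConfig, episodes: int = 100) -> float:
    """Stand-in playtest: return the agents' win rate against `cfg`.

    A real platform would run trained playtesting agents inside the
    simulator; here we fake a monotone difficulty response so the
    balancing loop below has a signal to work against.
    """
    difficulty = cfg.health / 1000.0 + cfg.attack / 100.0
    wins = sum(random.random() > difficulty / 2.0 for _ in range(episodes))
    return wins / episodes


def balance(cfg: BossConfig, target_win_rate: float = 0.5,
            tol: float = 0.05, max_iters: int = 50) -> BossConfig:
    """Greedy hill-climb: nudge boss stats until the playtested win
    rate lands within `tol` of the designer's target."""
    for _ in range(max_iters):
        error = simulate_raid(cfg) - target_win_rate
        if abs(error) <= tol:
            break
        # Agents win too often -> buff the boss; too rarely -> nerf it.
        cfg.health *= 1.0 + 0.2 * error
        cfg.attack *= 1.0 + 0.1 * error
    return cfg


if __name__ == "__main__":
    tuned = balance(BossConfig(health=800.0, attack=40.0))
    print(tuned, simulate_raid(tuned))
```

Note that, per the abstract, the crux is the playtesting agent itself: if the agent does not generalize across content changes, the win-rate signal driving a loop like this becomes unreliable, which is precisely the gap RaidEnv's benchmarks are meant to expose.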
