Craftium: An Extensible Framework for Creating Reinforcement Learning Environments (2407.03969v1)

Published 4 Jul 2024 in cs.AI

Abstract: Most Reinforcement Learning (RL) environments are created by adapting existing physics simulators or video games. However, they usually lack the flexibility required for analyzing specific characteristics of RL methods often relevant to research. This paper presents Craftium, a novel framework for exploring and creating rich 3D visual RL environments that builds upon the Minetest game engine and the popular Gymnasium API. Minetest is built to be extended and can be used to easily create voxel-based 3D environments (often similar to Minecraft), while Gymnasium offers a simple and common interface for RL research. Craftium provides a platform that allows practitioners to create fully customized environments to suit their specific research requirements, ranging from simple visual tasks to infinite and procedurally generated worlds. We also provide five ready-to-use environments for benchmarking and as examples of how to develop new ones. The code and documentation are available at https://github.com/mikelma/craftium/.
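The abstract highlights that Gymnasium offers "a simple and common interface for RL research," which Craftium environments adopt. The sketch below illustrates that interface's `reset`/`step` contract with a self-contained toy environment; the environment class and its observations are illustrative placeholders, not part of Craftium, so consult the project documentation for the actual environment ids and spaces.

```python
import random

class ToyVoxelEnv:
    """Toy stand-in illustrating the Gymnasium-style reset/step contract
    that Craftium environments expose. This class is hypothetical and is
    not part of the Craftium codebase."""

    def __init__(self, episode_len=10):
        self.episode_len = episode_len  # steps before the episode is truncated
        self.t = 0

    def reset(self, seed=None):
        # Gymnasium's reset returns (observation, info).
        if seed is not None:
            random.seed(seed)
        self.t = 0
        obs = [0.0, 0.0, 0.0]  # stand-in for a visual observation
        return obs, {}

    def step(self, action):
        # Gymnasium's step returns (obs, reward, terminated, truncated, info).
        self.t += 1
        obs = [random.random() for _ in range(3)]
        reward = 1.0 if action == 1 else 0.0
        terminated = False                        # no success/failure state here
        truncated = self.t >= self.episode_len    # episode time limit reached
        return obs, reward, terminated, truncated, {}

# The standard interaction loop any Gymnasium-compatible agent uses:
env = ToyVoxelEnv()
obs, info = env.reset(seed=0)
total_reward = 0.0
done = False
while not done:
    action = random.choice([0, 1])  # random policy for illustration
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
```

Because Craftium environments follow this same contract, existing RL libraries built against Gymnasium (agents, wrappers, vectorization) can drive them without modification.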

Authors (3)
  1. Josu Ceberio (12 papers)
  2. Jose A. Lozano (31 papers)
  3. Mikel Malagón (2 papers)
