Craftium: An Extensible Framework for Creating Reinforcement Learning Environments (2407.03969v1)
Abstract: Most Reinforcement Learning (RL) environments are created by adapting existing physics simulators or video games. However, they usually lack the flexibility required to analyze specific characteristics of RL methods that are often relevant to research. This paper presents Craftium, a novel framework for exploring and creating rich 3D visual RL environments that builds upon the Minetest game engine and the popular Gymnasium API. Minetest is designed to be extended and can be used to easily create voxel-based 3D environments (often similar to Minecraft), while Gymnasium offers a simple and common interface for RL research. Craftium provides a platform that allows practitioners to create fully customized environments to suit their specific research requirements, ranging from simple visual tasks to infinite and procedurally generated worlds. We also provide five ready-to-use environments for benchmarking and as examples of how to develop new ones. The code and documentation are available at https://github.com/mikelma/craftium/.
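Since Craftium exposes its environments through the standard Gymnasium interface, interacting with one should follow the usual reset/step loop. The sketch below illustrates that workflow; the environment ID `"Craftium/ChopTree-v0"` and the assumption that importing `craftium` registers its environments are illustrative guesses, so consult the linked repository for the actual names.

```python
# Minimal sketch of driving a Craftium environment through the standard
# Gymnasium loop. The environment ID below is a hypothetical example;
# see the Craftium documentation for the actual registered IDs.
import gymnasium as gym
import craftium  # assumed to register Craftium environments on import

env = gym.make("Craftium/ChopTree-v0")  # hypothetical environment ID

obs, info = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # random policy as a placeholder
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```

Because the interface is plain Gymnasium, the same loop works with any Gymnasium-compatible RL library (e.g., Stable-Baselines3 or CleanRL) without Craftium-specific glue code.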
- Josu Ceberio
- Jose A. Lozano
- Mikel Malagón