EvoX: A Distributed GPU-accelerated Framework for Scalable Evolutionary Computation (2301.12457v10)
Abstract: Inspired by natural evolutionary processes, Evolutionary Computation (EC) has established itself as a cornerstone of Artificial Intelligence. Recently, with the surge in data-intensive applications and large-scale complex systems, the demand for scalable EC solutions has grown significantly. However, most existing EC infrastructures fall short of the heightened demands of large-scale problem solving. While the advent of pioneering GPU-accelerated EC libraries is a step forward, these libraries still face limitations, particularly in flexibility and architectural robustness. In response, we introduce EvoX: a computing framework tailored for automated, distributed, and heterogeneous execution of EC algorithms. At the core of EvoX lies a unique programming model that streamlines the development of parallelizable EC algorithms, complemented by a computation model specifically optimized for distributed GPU acceleration. Building upon this foundation, we have crafted an extensive library comprising a wide spectrum of 50+ EC algorithms for both single- and multi-objective optimization. The library also offers comprehensive support for a diverse set of benchmark problems, ranging from dozens of numerical test functions to hundreds of reinforcement learning tasks. Through extensive experiments across a range of problem scenarios and hardware configurations, EvoX demonstrates robust system and model performance. EvoX is open-source and accessible at: https://github.com/EMI-Group/EvoX.
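To make the "programming model for parallelizable EC algorithms" concrete, the sketch below shows the functional ask-evaluate-tell pattern commonly used by JAX-based EC frameworks, in which the whole generation step is a pure function that can be JIT-compiled and dispatched to a GPU. This is a hypothetical illustration of the general pattern, not EvoX's actual API; the `ask`/`tell` names, the simple Gaussian sampler, and the sphere objective are all assumptions chosen for brevity.

```python
# Hypothetical sketch of the functional ask-evaluate-tell pattern used by
# JAX-based EC frameworks (NOT EvoX's actual API). The generation step is a
# pure function of (rng key, state), so jax.jit can compile the whole loop.
import jax
import jax.numpy as jnp

def ask(key, mean, sigma, pop_size, dim):
    # Sample a population around the current mean (simple Gaussian search).
    noise = jax.random.normal(key, (pop_size, dim))
    return mean + sigma * noise

def sphere(x):
    # Toy benchmark objective: sum of squares, minimized at the origin.
    return jnp.sum(x ** 2, axis=-1)

def tell(population, fitness, elite_frac=0.5):
    # Update: move the mean to the centroid of the better half.
    k = int(population.shape[0] * elite_frac)
    elite_idx = jnp.argsort(fitness)[:k]
    return jnp.mean(population[elite_idx], axis=0)

@jax.jit
def step(key, mean):
    # One generation: sample, evaluate, select. Pure and JIT-compilable,
    # so XLA can fuse it and run it on GPU without Python overhead.
    pop = ask(key, mean, sigma=0.1, pop_size=64, dim=8)
    fit = sphere(pop)
    return tell(pop, fit)

key = jax.random.PRNGKey(0)
mean = jnp.ones(8)  # start away from the optimum
for _ in range(100):
    key, subkey = jax.random.split(key)
    mean = step(subkey, mean)
```

Because `step` is stateless apart from its arguments, the same pattern extends to distributed execution: evaluation can be sharded across devices (e.g. via `jax.pmap` or sharded arrays) without changing the algorithm's logic.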