
EvoX: A Distributed GPU-accelerated Framework for Scalable Evolutionary Computation (2301.12457v10)

Published 29 Jan 2023 in cs.NE

Abstract: Inspired by natural evolutionary processes, Evolutionary Computation (EC) has established itself as a cornerstone of Artificial Intelligence. Recently, with the surge in data-intensive applications and large-scale complex systems, the demand for scalable EC solutions has grown significantly. However, most existing EC infrastructures fall short of catering to the heightened demands of large-scale problem solving. While the advent of some pioneering GPU-accelerated EC libraries is a step forward, they also grapple with some limitations, particularly in terms of flexibility and architectural robustness. In response, we introduce EvoX: a computing framework tailored for automated, distributed, and heterogeneous execution of EC algorithms. At the core of EvoX lies a unique programming model to streamline the development of parallelizable EC algorithms, complemented by a computation model specifically optimized for distributed GPU acceleration. Building upon this foundation, we have crafted an extensive library comprising a wide spectrum of 50+ EC algorithms for both single- and multi-objective optimization. Furthermore, the library offers comprehensive support for a diverse set of benchmark problems, ranging from dozens of numerical test functions to hundreds of reinforcement learning tasks. Through extensive experiments across a range of problem scenarios and hardware configurations, EvoX demonstrates robust system and model performances. EvoX is open-source and accessible at: https://github.com/EMI-Group/EvoX.


Summary

  • The paper introduces EvoX, a novel framework that leverages distributed GPUs to significantly reduce evolutionary computation times.
  • It employs a functional programming model and supports over 50 EC algorithms for both single- and multi-objective optimization.
  • Extensive experiments demonstrate an order-of-magnitude speedup and strong scalability in multi-node settings for complex tasks.

Insights into EvoX: A Distributed GPU-Accelerated Framework for Scalable Evolutionary Computation

The paper introduces EvoX, a computing framework designed for scalable Evolutionary Computation (EC). EC's adaptability and robustness have made it a valuable tool across diverse and complex problem domains, but the rise of data-intensive applications and large-scale systems demands EC solutions that remain efficient as problems grow. EvoX addresses this need by combining GPU acceleration, distributed task execution, and a purpose-built architecture.

Core Contributions

The EvoX framework is structured around a programming paradigm that emphasizes parallel execution, exploiting the inherent parallelizability of EC tasks. It offers a library of over 50 EC algorithms covering both single- and multi-objective optimization, together with extensive support for benchmark problems ranging from numerical test functions to reinforcement learning tasks. Systematic experiments show consistent performance gains across a range of problem scenarios and hardware configurations.

The framework is underpinned by a straightforward functional programming model, which simplifies the development of EC algorithms while facilitating their parallel execution. A hierarchical state management system coordinates algorithm and problem state, which is essential in distributed settings where state must be synchronized across multiple GPUs; the sketch below illustrates the general style.
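Below is a minimal sketch in plain JAX (not EvoX's actual API) of the functional ask/tell style this model is built around: all algorithm state is an explicit value that each step consumes and returns, never mutated in place, which keeps the step side-effect-free and jit-compilable. The toy isotropic evolution strategy and the sphere objective are illustrative stand-ins.

```python
import jax
import jax.numpy as jnp

POP_SIZE, DIM, SIGMA = 64, 10, 0.1


def ask(mean, key):
    """Sample a candidate population around the current mean."""
    noise = jax.random.normal(key, (POP_SIZE, DIM))
    return mean + SIGMA * noise, noise


def tell(mean, noise, fitness):
    """Return a *new* mean built from the better half of the population."""
    elite = noise[jnp.argsort(fitness)[: POP_SIZE // 2]].mean(axis=0)
    return mean + SIGMA * elite


@jax.jit  # the whole generation compiles to one fused device program
def step(mean, key):
    pop, noise = ask(mean, key)
    fitness = jax.vmap(lambda x: jnp.sum(x**2))(pop)  # sphere as a stand-in problem
    return tell(mean, noise, fitness)


key, mean = jax.random.PRNGKey(0), jnp.zeros(DIM)
for _ in range(100):
    key, subkey = jax.random.split(key)
    mean = step(mean, subkey)
```

Because `step` is a pure function of its inputs, the same code runs unchanged on CPU or GPU and composes directly with JAX's vectorization and device-parallel transformations.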

Robust Numerical Results

The paper reports strong numerical results. Execution times of EC algorithms drop substantially under GPU acceleration, and the advantage widens as problem dimensionality and population size grow. The gains are validated across single-objective algorithms such as PSO, DE, and CMA-ES, and multi-objective algorithms such as NSGA-II, MOEA/D, and IBEA, with EvoX often achieving an order-of-magnitude speedup over CPU counterparts.
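A hedged illustration of where these speedups come from (again plain JAX rather than EvoX code): vectorizing fitness evaluation over the entire population compiles it into a single device kernel, so cost scales with accelerator throughput instead of Python-level loop overhead. Rastrigin is just a representative benchmark function here.

```python
import jax
import jax.numpy as jnp


def rastrigin(x):
    # Classic multimodal test function; any numerical benchmark fits here.
    return 10.0 * x.shape[0] + jnp.sum(x**2 - 10.0 * jnp.cos(2.0 * jnp.pi * x))


# vmap maps over the population axis; jit fuses the whole evaluation.
evaluate = jax.jit(jax.vmap(rastrigin))  # (pop_size, dim) -> (pop_size,)

pop = jax.random.uniform(
    jax.random.PRNGKey(0), (10_000, 512), minval=-5.12, maxval=5.12
)
fitness = evaluate(pop)  # one kernel launch for the full population
```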

In multi-node experiments, EvoX scales efficiently as GPU nodes are added: task completion times decrease markedly, evidencing the framework's ability to harness the collective computational power of distributed resources. Notably, EvoX's performance on neuroevolution tasks shows substantial improvements over established baselines, further underscoring its computational efficiency.
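As a rough sketch of the data-parallel layout such scaling relies on, the example below shards population evaluation across local accelerators with `jax.pmap`. EvoX's own distributed runtime additionally manages multi-node placement and synchronization, which this simplified single-host example does not attempt.

```python
import jax
import jax.numpy as jnp

n_dev = jax.local_device_count()
# Leading axis indexes devices; each device holds one population shard.
pop = jax.random.normal(jax.random.PRNGKey(0), (n_dev, 4096, 512))


@jax.pmap
def eval_shard(shard):
    # Every device evaluates its own slice of the population independently.
    return jax.vmap(lambda x: jnp.sum(x**2))(shard)


fitness = eval_shard(pop)  # shape (n_dev, 4096), one shard per device
```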

Implications and Future Directions

From a practical perspective, EvoX holds promise for any domain where large-scale data processing and optimization are critical. It stands to significantly benefit areas such as evolutionary multitasking and transfer optimization via efficient resource utilization and accelerated processing.

Theoretically, EvoX provides a sound infrastructure for pushing the boundaries of scalable evolutionary algorithms. By removing scalability bottlenecks through distributed execution models and GPU optimization, it advances the state of the field substantially.

Looking ahead, further exploration into the development of additional EC applications using EvoX could yield significant advancements, particularly within the realms of Evolutionary Multitasking and Transfer Optimization. Additionally, continuous improvements in computing architectures will allow EvoX to maintain its relevance in the evolving AI landscape.

In conclusion, the EvoX framework represents a significant stride in the development of scalable EC solutions. Its ability to leverage distributed GPU resources, coupled with an innovative programming framework, establishes a crucial platform that can tackle the demands of contemporary large-scale computational challenges effectively.