Multiform Evolution for High-Dimensional Problems with Low Effective Dimensionality (2401.00168v1)
Abstract: In this paper, we scale evolutionary algorithms to high-dimensional optimization problems that deceptively possess a low effective dimensionality (i.e., certain dimensions do not significantly affect the objective function). To this end, an instantiation of the multiform optimization paradigm is presented, in which multiple low-dimensional counterparts of a target high-dimensional task are generated via random embeddings. Since the exact relationship between the auxiliary (low-dimensional) tasks and the target is a priori unknown, a multiform evolutionary algorithm is developed to unify all formulations into a single multi-task setting. The resultant joint optimization enables the target task to efficiently reuse solutions evolved across the various low-dimensional searches via cross-form genetic transfers, thereby accelerating overall convergence. To validate the efficacy of the proposed algorithmic framework, comprehensive experimental studies are carried out on well-known continuous benchmark functions as well as a set of practical problems in the hyper-parameter tuning of machine learning models and of deep learning models, in classification tasks and Predator-Prey games, respectively.
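The core mechanism described above, searching a low-dimensional space and mapping candidates back to the high-dimensional space through a random embedding, can be illustrated with a minimal sketch. This is not the paper's code: the objective, embedding matrix, and simple (1+1)-style search below are assumptions standing in for one of the low-dimensional auxiliary tasks.

```python
import numpy as np

# A 1000-D objective with low effective dimensionality: only the first
# 5 coordinates influence its value (a hypothetical example problem).
D, d = 1000, 5

def f_high(x):
    return float(np.sum(x[:5] ** 2))

rng = np.random.default_rng(0)
A = rng.normal(size=(D, d))  # random embedding matrix, R^d -> R^D

def f_low(y):
    # Low-dimensional counterpart: evolve y in R^d, evaluate x = A y in R^D.
    return f_high(A @ y)

# A simple (1+1)-style random search in the embedded 5-D space stands in
# for one of the low-dimensional evolutionary searches in the framework.
best_y = rng.normal(size=d)
best = f_low(best_y)
for _ in range(2000):
    cand = best_y + 0.3 * rng.normal(size=d)
    val = f_low(cand)
    if val < best:
        best_y, best = cand, val

print(best)
```

In the full multiform setting, several such embedded searches with different `A` matrices would run jointly with the target high-dimensional task, exchanging promising solutions through cross-form genetic transfer.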