Evolutionary Reinforcement Learning: A Systematic Review and Future Directions (2402.13296v1)

Published 20 Feb 2024 in cs.NE

Abstract: In response to the limitations of reinforcement learning and evolutionary algorithms (EAs) in complex problem-solving, Evolutionary Reinforcement Learning (EvoRL) has emerged as a synergistic solution. EvoRL integrates EAs and reinforcement learning, presenting a promising avenue for training intelligent agents. This systematic review first surveys the technological background of EvoRL, examining the symbiotic relationship between EAs and reinforcement learning algorithms. We then delve into the challenges faced by both EAs and reinforcement learning, exploring their interplay and impact on the efficacy of EvoRL. Furthermore, the review underscores the need to address open issues related to scalability, adaptability, sample efficiency, adversarial robustness, ethics, and fairness within the current landscape of EvoRL. Finally, we propose future directions for EvoRL, emphasizing research avenues that strive to enhance self-adaptation and self-improvement, generalization, interpretability, and explainability, among others. Serving as a comprehensive resource for researchers and practitioners, this systematic review provides insights into the current state of EvoRL and offers a guide for advancing its capabilities in the ever-evolving landscape of artificial intelligence.
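
To make the hybrid EvoRL pattern described in the abstract concrete, the sketch below shows one generation of a loop in the spirit of methods such as ERL (Khadka and Tumer, 2018): an evolutionary population of policies is evaluated by rollouts, elites are selected and mutated, and a copy of a separately trained gradient-based policy is injected into the population. This is a minimal illustration under assumed interfaces; the `Policy` class, `evaluate`, `mutate`, and the Gymnasium-style `env` API are illustrative stand-ins, not the paper's implementation.

```python
# Minimal illustrative sketch of an EvoRL generation (assumed interfaces,
# not the surveyed paper's code). Assumes a Gymnasium-style environment.
import copy
import random

import numpy as np


class Policy:
    """A tiny linear policy: action = W @ observation (illustrative)."""

    def __init__(self, obs_dim: int, act_dim: int):
        self.w = np.random.randn(act_dim, obs_dim) * 0.1

    def act(self, obs: np.ndarray) -> np.ndarray:
        return self.w @ obs


def evaluate(policy: Policy, env, episodes: int = 3) -> float:
    """Average episodic return over a few rollouts: the EA's fitness signal."""
    total = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy.act(obs))
            total += reward
            done = terminated or truncated
    return total / episodes


def mutate(policy: Policy, sigma: float = 0.05) -> Policy:
    """Gaussian parameter noise, the canonical EA variation operator."""
    child = copy.deepcopy(policy)
    child.w += sigma * np.random.randn(*child.w.shape)
    return child


def evorl_generation(population, rl_policy, env):
    """One EvoRL generation: evaluate, select, vary, inject the RL policy.

    Gradient-based training of `rl_policy` (e.g. an off-policy actor-critic
    updated from a shared replay buffer) is assumed to happen elsewhere;
    injecting a copy each generation is how gradient information flows
    into the evolving population.
    """
    fitness = [evaluate(p, env) for p in population]
    ranked = [p for _, p in sorted(zip(fitness, population),
                                   key=lambda t: t[0], reverse=True)]
    elites = ranked[: max(1, len(ranked) // 2)]
    n_children = max(0, len(population) - len(elites) - 1)
    children = [mutate(random.choice(elites)) for _ in range(n_children)]
    return elites + children + [copy.deepcopy(rl_policy)]
```

Note the selection-based safeguard this family of methods relies on: the injected RL policy survives into later generations only if it earns high fitness in the next evaluation, so a poorly trained gradient learner cannot corrupt the population.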

Authors (6)
  1. Yuanguo Lin (5 papers)
  2. Fan Lin (15 papers)
  3. Guorong Cai (5 papers)
  4. Hong Chen (230 papers)
  5. Lixin Zou (22 papers)
  6. Pengcheng Wu (25 papers)
Citations (1)
