Evolutionary Computation in the Era of Large Language Model: Survey and Roadmap (2401.10034v3)

Published 18 Jan 2024 in cs.NE, cs.AI, and cs.CL

Abstract: LLMs have not only revolutionized natural language processing but also extended their capabilities to many other domains, marking a significant stride toward artificial general intelligence. Despite differing in objectives and methodologies, LLMs and evolutionary algorithms (EAs) share a common pursuit of applicability to complex problems. EAs can provide an optimization framework for further enhancing LLMs under black-box settings, endowing them with flexible global search capabilities; conversely, the abundant domain knowledge inherent in LLMs enables EAs to conduct more intelligent searches, and the text-processing and generative capabilities of LLMs aid in deploying EAs across a wide range of tasks. Building on these complementary advantages, this paper provides a thorough review and a forward-looking roadmap, categorizing the reciprocal inspiration into two main avenues: LLM-enhanced EA and EA-enhanced LLM. Integrated synergy methods are further introduced to exemplify the complementarity between LLMs and EAs in diverse scenarios, including code generation, software engineering, neural architecture search, and various generation tasks. As the first comprehensive review focused on EA research in the era of LLMs, this paper provides a foundational stepping stone for understanding the collaborative potential of LLMs and EAs. The identified challenges and future directions offer guidance for researchers and practitioners seeking to unlock the full potential of this collaboration for advancing optimization and artificial intelligence. A GitHub repository indexing the relevant papers is available at https://github.com/wuxingyu-ai/LLM4EC.
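
To make the two avenues concrete, the minimal sketch below (not taken from the paper) shows an evolutionary loop that optimizes prompts for an LLM while using the LLM itself as the variation (crossover/mutation) operator. The functions `query_llm`, `fitness`, and their toy stand-in bodies are assumptions for illustration only; a real system would plug in an actual model client and a task-specific evaluator.

```python
import random

def query_llm(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g., an API client). Here it just
    # shuffles and truncates the request text so the sketch runs end to end.
    words = prompt.split()
    random.shuffle(words)
    return " ".join(words[:12])

def fitness(candidate: str) -> float:
    # Stand-in evaluator: a real system would score downstream task
    # performance (accuracy, reward, pass rate, ...). This toy version
    # prefers short prompts that still mention "summarize".
    return ("summarize" in candidate.lower()) * 10 - 0.1 * len(candidate)

def llm_variation(parent_a: str, parent_b: str) -> str:
    # The LLM acts as crossover + mutation: it is asked to blend two
    # parent prompts into one improved child prompt.
    request = (
        "Combine the two prompts below into one improved prompt.\n"
        f"Prompt A: {parent_a}\nPrompt B: {parent_b}\nNew prompt:"
    )
    return query_llm(request)

def evolve_prompts(seeds, generations=5, pop_size=8):
    population = list(seeds)
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: max(2, pop_size // 4)]            # truncation selection
        children = [llm_variation(*random.sample(parents, 2))
                    for _ in range(pop_size - len(parents))]
        population = parents + children                      # elitist replacement
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve_prompts([
        "Please summarize the article in three sentences.",
        "Give a concise summary highlighting key findings.",
    ])
    print(best)
```

Read in one direction, the loop is an EA-enhanced LLM (black-box prompt optimization); read in the other, the LLM-driven variation step is an LLM-enhanced EA, which is the reciprocal relationship the survey categorizes.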

Authors (5)
  1. Xingyu Wu (24 papers)
  2. Sheng-hao Wu (3 papers)
  3. Jibin Wu (42 papers)
  4. Liang Feng (59 papers)
  5. Kay Chen Tan (83 papers)
Citations (35)