Enhancing Decision-Making in Optimization through LLM-Assisted Inference: A Neural Networks Perspective (2405.07212v1)

Published 12 May 2024 in cs.NE and cs.AI

Abstract: This paper explores the seamless integration of Generative AI (GenAI) and Evolutionary Algorithms (EAs) within the domain of large-scale multi-objective optimization. Focusing on the transformative role of LLMs, our study investigates the potential of LLM-Assisted Inference to automate and enhance decision-making processes. Specifically, we highlight its effectiveness in illuminating key decision variables in evolutionarily optimized solutions while articulating contextual trade-offs. Tailored to address the challenges inherent in inferring complex multi-objective optimization solutions at scale, our approach emphasizes the adaptive nature of LLMs, allowing them to provide nuanced explanations and align their language with diverse stakeholder expertise levels and domain preferences. Empirical studies underscore the practical applicability and impact of LLM-Assisted Inference in real-world decision-making scenarios.
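The paper does not include published code, but the core idea in the abstract — handing an evolutionarily optimized solution's decision variables and objective values to an LLM so it can surface the most influential variables and explain trade-offs in language matched to a stakeholder's expertise — can be sketched as a prompt-construction step. Everything below is illustrative and not taken from the paper: the function name, the toy solution, and the prompt template are assumptions, and the actual LLM call is left out.

```python
# Hypothetical sketch of the prompt-building step in LLM-assisted inference.
# The names and template here are illustrative, not the paper's method;
# the resulting string would be sent to an LLM of the user's choosing.

def build_inference_prompt(variables, objectives, audience="executive"):
    """Format a multi-objective solution into a prompt asking an LLM to
    identify key decision variables and explain contextual trade-offs,
    pitched at a given stakeholder audience."""
    var_lines = "\n".join(
        f"  {name} = {value}" for name, value in sorted(variables.items())
    )
    obj_lines = "\n".join(
        f"  {name}: {value}" for name, value in objectives.items()
    )
    return (
        f"You are assisting a {audience} stakeholder.\n"
        "A multi-objective evolutionary algorithm produced this solution.\n"
        f"Decision variables:\n{var_lines}\n"
        f"Objective values:\n{obj_lines}\n"
        "Identify the decision variables that most influence each objective "
        "and explain the trade-offs between objectives in plain language "
        "appropriate for this audience."
    )

# Toy Pareto-optimal solution (invented values for illustration only).
solution = {"x1": 0.82, "x2": 0.10, "x3": 0.55}
objs = {"cost": 412.0, "emissions": 17.3}
prompt = build_inference_prompt(solution, objs)
```

In a real pipeline the audience parameter would be varied per stakeholder, which is how the abstract's point about aligning language with "diverse stakeholder expertise levels" would be realized in practice.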

Authors (2)
  1. Gaurav Singh
  2. Kavitesh Kumar Bali
Citations (2)