ORLM: A Customizable Framework in Training Large Models for Automated Optimization Modeling (2405.17743v3)

Published 28 May 2024 in cs.CL, cs.AI, cs.CE, and cs.LG

Abstract: Optimization modeling and solving play a critical role in applying Operations Research (OR) tools to real-world problems, yet they pose challenges and require extensive expertise from OR experts. With the advent of LLMs, new opportunities have emerged to streamline and automate these tasks. However, current research predominantly relies on closed-source LLMs such as GPT-4 together with extensive prompt engineering. This reliance stems from the scarcity of high-quality training datasets for optimization modeling, and it results in elevated costs, prolonged processing times, and privacy concerns. To address these challenges, our work is the first to propose a viable path for training open-source LLMs that can perform optimization modeling as well as develop and execute solver code, ultimately yielding superior ability to automate optimization modeling and solving. In particular, we introduce OR-Instruct, a semi-automated data synthesis framework for optimization modeling problems. This framework merges the training-data requirements of large models with the unique characteristics of optimization modeling problems, and it allows customizable enhancements tailored to specific scenarios or modeling types. To evaluate the proposed framework, we present the IndustryOR benchmark, the first industrial benchmark for evaluating LLMs on practical OR problems. Using data synthesized through OR-Instruct, we train several open-source LLMs with 7 billion parameters (dubbed ORLMs). The resulting models demonstrate significantly enhanced optimization modeling capabilities, achieving state-of-the-art performance across the NL4OPT, MAMO, and IndustryOR benchmarks. Our code and data are available at https://github.com/Cardinal-Operations/ORLM.
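The abstract describes models that turn a natural-language OR problem into a mathematical formulation and then into executable solver code. Below is a minimal, illustrative sketch of the kind of program such a model would be expected to emit, not the paper's actual pipeline: SciPy's linprog is used as a neutral stand-in for whichever solver the released models actually target, and the toy product-mix instance and its coefficients are hypothetical.

# Sketch of an executable solver program for a natural-language LP problem.
# Hypothetical problem: maximize profit 3*x1 + 4*x2 subject to
#   2*x1 + 1*x2 <= 10   (machine hours)
#   1*x1 + 3*x2 <= 15   (labor hours)
#   x1, x2 >= 0
from scipy.optimize import linprog

# linprog minimizes, so negate the objective coefficients to maximize.
res = linprog(
    c=[-3.0, -4.0],
    A_ub=[[2.0, 1.0], [1.0, 3.0]],
    b_ub=[10.0, 15.0],
    bounds=[(0, None), (0, None)],
    method="highs",
)

if res.success:
    print("optimal profit:", -res.fun)
    print("production plan:", res.x)

In the setup described, a generated program like this would be executed and its reported optimum compared against a ground-truth answer; the sketch covers only this final code-emission step, not the OR-Instruct data synthesis or the model training itself.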

Authors (8)
  1. Zhengyang Tang (13 papers)
  2. Chenyu Huang (18 papers)
  3. Xin Zheng (57 papers)
  4. Shixi Hu (1 paper)
  5. Zizhuo Wang (24 papers)
  6. Dongdong Ge (34 papers)
  7. Benyou Wang (109 papers)
  8. Ruoqing Jiang (2 papers)
Citations (4)