OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling (2407.09887v3)

Published 13 Jul 2024 in cs.LG and math.OC

Abstract: LLMs have exhibited their problem-solving abilities in mathematical reasoning. Solving realistic optimization (OPT) problems in application scenarios requires advanced and applied mathematics ability. However, current OPT benchmarks, which cover only linear programming, are far from complex realistic situations. In this work, we propose OptiBench, a benchmark for end-to-end optimization problem-solving with human-readable inputs and outputs. OptiBench contains rich optimization problems, including linear and nonlinear programming with or without tabular data, which can comprehensively evaluate LLMs' solving ability. In our benchmark, LLMs are required to call a code solver to provide precise numerical answers. Furthermore, to alleviate the data scarcity for optimization problems, and to bridge the gap between small-scale open-source LLMs (e.g., Llama-3-8b) and closed-source LLMs (e.g., GPT-4), we further propose a data synthesis method named ReSocratic. Unlike general data synthesis methods that proceed from questions to answers, ReSocratic first incrementally synthesizes formatted optimization demonstrations with mathematical formulations step by step and then back-translates the generated demonstrations into questions. Based on this, we synthesize the ReSocratic-29k dataset. We further conduct supervised fine-tuning with ReSocratic-29k on multiple open-source models. Experimental results show that ReSocratic-29k significantly improves the performance of open-source models.
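
The abstract notes that in OptiBench an LLM must call a code solver to return a precise numerical answer rather than a free-form estimate. As a minimal sketch of what such solver-backed answering could look like (the solver choice, SciPy's linprog, and the toy problem below are illustrative assumptions, not taken from the benchmark itself), the model would emit code along these lines for a small linear program:

```python
# Illustrative sketch: solver code an LLM might emit for a toy LP.
# Hypothetical problem: maximize profit 3x + 5y
#   subject to x + 2y <= 14, 3x - y >= 0, x - y <= 2, x, y >= 0.
from scipy.optimize import linprog

# linprog minimizes, so negate the objective coefficients.
c = [-3, -5]
A_ub = [
    [1, 2],    # x + 2y <= 14
    [-3, 1],   # 3x - y >= 0  rewritten as  -3x + y <= 0
    [1, -1],   # x - y <= 2
]
b_ub = [14, 0, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(f"optimal value: {-res.fun:.2f} at x = {res.x[0]:.2f}, y = {res.x[1]:.2f}")
# -> optimal value: 38.00 at x = 6.00, y = 4.00
```

Running the generated code yields an exact optimum that can be checked against the ground-truth answer, which is how an end-to-end benchmark of this kind can score models on numerical correctness rather than on the surface form of their reasoning.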

Authors (10)
  1. Zhicheng Yang (26 papers)
  2. Yinya Huang (22 papers)
  3. Wei Shi (116 papers)
  4. Liang Feng (59 papers)
  5. Linqi Song (93 papers)
  6. Yiwei Wang (119 papers)
  7. Xiaodan Liang (318 papers)
  8. Jing Tang (108 papers)
  9. Zhijiang Guo (55 papers)
  10. Xiongwei Han (15 papers)