Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation (2308.15363v4)

Published 29 Aug 2023 in cs.DB, cs.CL, and cs.LG

Abstract: LLMs have emerged as a new paradigm for the Text-to-SQL task. However, the absence of a systematic benchmark inhibits the design of effective, efficient, and economical LLM-based Text-to-SQL solutions. To address this challenge, in this paper we first conduct a systematic and extensive comparison of existing prompt engineering methods, covering question representation, example selection, and example organization, and, based on these experimental results, elaborate on their pros and cons. Building on these findings, we propose a new integrated solution, named DAIL-SQL, which refreshes the Spider leaderboard with 86.6% execution accuracy and sets a new bar. To explore the potential of open-source LLMs, we investigate them in various scenarios and further enhance their performance with supervised fine-tuning. Our explorations highlight open-source LLMs' potential in Text-to-SQL, as well as the advantages and disadvantages of supervised fine-tuning. Additionally, toward an efficient and economical LLM-based Text-to-SQL solution, we emphasize token efficiency in prompt engineering and compare prior studies under this metric. We hope that our work provides a deeper understanding of Text-to-SQL with LLMs and inspires further investigations and broader applications.
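To make the prompt engineering components compared in the paper concrete (question representation, example selection, and example organization), the following is a minimal sketch in Python. The schema, example pool, and the similarity heuristic (stdlib SequenceMatcher over raw question text) are illustrative assumptions for this sketch, not the authors' DAIL-SQL implementation.

```python
# Hypothetical sketch of a Text-to-SQL prompt pipeline in the style the
# paper benchmarks: code-style question representation, similarity-based
# example selection, and full question-SQL pairs as example organization.
# Schema, examples, and the similarity measure are illustrative assumptions.

from difflib import SequenceMatcher

SCHEMA = "CREATE TABLE singer (singer_id INT, name TEXT, age INT);"

# Candidate few-shot examples: (natural-language question, gold SQL).
EXAMPLES = [
    ("How many singers are there?", "SELECT count(*) FROM singer;"),
    ("List the names of singers older than 30.",
     "SELECT name FROM singer WHERE age > 30;"),
    ("What is the average age of all singers?",
     "SELECT avg(age) FROM singer;"),
]


def select_examples(question: str, k: int = 2):
    """Pick the k examples whose questions are most similar to the target
    question (a crude stand-in for a learned similarity measure)."""
    scored = sorted(
        EXAMPLES,
        key=lambda ex: SequenceMatcher(None, question.lower(),
                                       ex[0].lower()).ratio(),
        reverse=True,
    )
    return scored[:k]


def build_prompt(question: str) -> str:
    """Assemble a code-representation prompt: schema first, then full
    question-SQL example pairs, then the target question."""
    parts = ["/* Database schema */", SCHEMA, ""]
    for q, sql in select_examples(question):
        parts += [f"/* Question: {q} */", sql, ""]
    parts += [f"/* Question: {question} */", "SELECT"]
    return "\n".join(parts)


if __name__ == "__main__":
    print(build_prompt("How many singers are older than 25?"))
```

The resulting string would be sent to an LLM for completion; token efficiency in this framing comes from how much schema and how many example pairs are packed into the prompt.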

Authors (7)
  1. Dawei Gao (27 papers)
  2. Haibin Wang (26 papers)
  3. Yaliang Li (117 papers)
  4. Xiuyu Sun (25 papers)
  5. Yichen Qian (10 papers)
  6. Bolin Ding (112 papers)
  7. Jingren Zhou (198 papers)
Citations (148)