
SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL (extended) (2306.00739v4)

Published 26 May 2023 in cs.CL, cs.AI, and cs.DB

Abstract: Text-to-SQL, the process of translating natural language into Structured Query Language (SQL), represents a transformative application of LLMs, potentially revolutionizing how humans interact with data. This paper introduces the SQL-PaLM framework, a comprehensive solution for understanding and enhancing Text-to-SQL with LLMs across two learning regimes: few-shot prompting and instruction fine-tuning. With few-shot prompting, we explore the effectiveness of consistency decoding with execution-based error filtering. With instruction fine-tuning, we delve deep into the critical paradigms that influence the performance of tuned LLMs. In particular, we investigate how performance can be improved through expanded training data coverage and diversity, synthetic data augmentation, and the integration of query-specific database content. We propose a test-time selection method that further refines accuracy by integrating SQL outputs from multiple paradigms with execution feedback as guidance. Additionally, we tackle the practical challenge of navigating intricate databases with a significant number of tables and columns, proposing efficient techniques for accurately selecting relevant database elements to enhance Text-to-SQL performance. Our holistic approach yields substantial advancements in Text-to-SQL, as demonstrated on two key public benchmarks, Spider and BIRD. Through comprehensive ablations and error analyses, we shed light on the strengths and weaknesses of our framework, offering valuable insights for future work on Text-to-SQL.
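
To make the few-shot idea concrete, below is a minimal sketch (not the authors' code) of consistency decoding with execution-based error filtering: candidate SQL queries are sampled from an LLM, candidates that fail to execute are discarded, and the remaining candidates are voted on by their execution results. The candidate-sampling step is assumed to exist elsewhere and is only referenced here.

```python
# Hedged sketch of execution-based consistency decoding, assuming candidate
# SQL strings have already been sampled from an LLM for a given question.
import sqlite3
from collections import Counter

def execute_sql(db_path: str, sql: str):
    """Run one candidate query; return a hashable result, or None on error."""
    try:
        with sqlite3.connect(db_path) as conn:
            rows = conn.execute(sql).fetchall()
        return tuple(rows)
    except sqlite3.Error:
        # Execution-based error filtering: drop candidates that fail to run.
        return None

def select_by_consistency(db_path: str, candidates: list[str]) -> str | None:
    """Keep executable candidates and majority-vote over execution results."""
    results = {}
    for sql in candidates:
        result = execute_sql(db_path, sql)
        if result is not None:
            results[sql] = result
    if not results:
        return None
    # The most frequent execution result wins; return one SQL that produced it.
    winner, _ = Counter(results.values()).most_common(1)[0]
    return next(sql for sql, res in results.items() if res == winner)
```

The same voting-with-execution-feedback idea underlies the test-time selection across paradigms described in the abstract, though the paper's actual selection procedure may differ in detail.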

Authors (11)
  1. Ruoxi Sun (58 papers)
  2. Hootan Nakhost (10 papers)
  3. Hanjun Dai (63 papers)
  4. Rajarishi Sinha (3 papers)
  5. Pengcheng Yin (42 papers)
  6. Tomas Pfister (89 papers)
  7. Alex Muzio (1 paper)
  8. Lesly Miculicich (15 papers)
  9. Satya Gundabathula (1 paper)
  10. Zifeng Wang (78 papers)
  11. Sercan Ö. Arik (8 papers)
Citations (14)