A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability (2303.13547v1)

Published 12 Mar 2023 in cs.CL and cs.AI

Abstract: This paper presents the first comprehensive analysis of ChatGPT's Text-to-SQL ability. Given the recent emergence of the large-scale conversational LLM ChatGPT and its impressive capabilities in both conversation and code generation, we sought to evaluate its Text-to-SQL performance. We conducted experiments on 12 benchmark datasets spanning different languages, settings, and scenarios, and the results demonstrate that ChatGPT has strong Text-to-SQL abilities. Although a gap remains relative to current state-of-the-art (SOTA) model performance, ChatGPT's results are impressive given that the experiments were conducted in a zero-shot setting. Notably, in the ADVETA (RPL) scenario, zero-shot ChatGPT even outperforms the SOTA model fine-tuned on the Spider dataset by 4.1%, demonstrating its potential for practical applications. To support further research in related fields, we have made the data generated by ChatGPT publicly available at https://github.com/THU-BPM/chatgpt-sql.

Comprehensive Evaluation of ChatGPT's Zero-Shot Text-to-SQL Capability

The paper "A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability" presents an in-depth assessment of ChatGPT’s performance on the Text-to-SQL task, a crucial area in the domain of semantic parsing. The research evaluates ChatGPT without any fine-tuning on task-specific training data, providing insights into its zero-shot capacity for generating SQL queries from natural language input. The paper encompasses experiments on 12 benchmark datasets across diverse languages, settings, and scenarios.

Key Findings

The evaluation highlights several critical findings about ChatGPT's Text-to-SQL capabilities:

  1. Performance Metrics: Measured against state-of-the-art (SOTA) models on the Spider dataset, a comprehensive Text-to-SQL benchmark, ChatGPT trails by 14% in execution accuracy (see the metric sketch after this list). Even so, its performance in a zero-shot context, with no task-specific training, is impressive.
  2. Scenario-Specific Performance: Notably, in the ADVETA (RPL) setting, where database schema elements are adversarially replaced, ChatGPT outperforms the fine-tuned SOTA model by 4.1%. This indicates significant robustness in scenarios involving adversarial schema modifications.
  3. Robustness: ChatGPT maintains high robustness across the benchmark scenarios; on robustness variants of the Spider suite, its gap to fine-tuned models narrows to 7.8%.
  4. Multilingual Capability: Performance on Chinese Text-to-SQL datasets such as CSpider and DuSQL indicates that ChatGPT's cross-lingual proficiency requires further improvement. Execution accuracy declines noticeably, especially when both the schema and the questions are in Chinese, signaling additional challenges in cross-lingual transfer.
  5. Multi-turn Interactions: In multi-turn Text-to-SQL settings, observed on datasets such as SParC and CoSQL, ChatGPT demonstrates competitive performance. Its handling of multi-turn interactions leverages its contextual modeling proficiency, although a gap remains compared to models trained specifically on multi-turn scenarios.
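
Execution accuracy, the headline metric above, counts a predicted query as correct if running it against the database returns the same result as the gold query. Below is a minimal sketch of that check using Python's standard sqlite3 module; the execution_match name and the order-insensitive comparison are simplifying assumptions, and official evaluation scripts normalize results more carefully.

```python
import sqlite3

def execution_match(db_path: str, predicted_sql: str, gold_sql: str) -> bool:
    """Return True if the predicted and gold SQL yield the same result set.
    Simplifying assumption: rows are compared as an order-insensitive multiset."""
    conn = sqlite3.connect(db_path)
    try:
        pred_rows = conn.execute(predicted_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    except sqlite3.Error:
        return False  # a query that fails to execute cannot match
    finally:
        conn.close()
    # Compare via repr strings so mixed-type rows (e.g. None vs int) sort safely.
    return sorted(map(repr, pred_rows)) == sorted(map(repr, gold_rows))

# Dataset-level execution accuracy is then the mean over all examples:
# accuracy = sum(execution_match(db, p, g) for db, p, g in examples) / len(examples)
```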

Implications and Future Work

The implications of this research are substantial for both theoretical and practical aspects of AI and NLP:

  • Progress in Zero-shot Learning: The paper exemplifies the progress of zero-shot learning methodologies, particularly in the code generation and semantic parsing domains. This progress underscores the increasing potential of deploying LLMs without domain-specific training, thereby reducing data annotation efforts and enhancing adaptability across diverse applications.
  • Enhancing Robustness: By highlighting ChatGPT's strong performance in the ADVETA scenario, this research provides a roadmap for future work to further improve model robustness, for instance through adversarial training and knowledge-incorporation techniques.
  • Incorporating Contextual Learning: The gap in ChatGPT's performance on multi-turn interactions opens up pathways for exploring and refining its conversational context integration capabilities. Future models might incorporate more sophisticated contextual learning frameworks to elevate their efficacy in interactive settings.
  • Expanding Cross-Lingual Capabilities: The challenges observed on multilingual Text-to-SQL tasks invite further research into cross-lingual understanding and generation within LLMs, for example through richer multilingual pretraining data and advanced transfer-learning techniques.

It is anticipated that future work will design better prompts and engage ChatGPT in iterative dialogue to refine its outputs toward executable SQL queries. Such efforts would further increase the practical utility of LLMs in real-world database interaction, driving advances in natural language interfaces to databases.
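
One natural realization of that iterative dialogue is an execute-and-repair loop: run the generated SQL and, if the database raises an error, feed the error message back to the model as a new turn. The sketch below is a hedged illustration assuming a generic ask_model chat function (hypothetical; substitute any chat-completion client).

```python
import sqlite3

def refine_sql(ask_model, db_path: str, prompt: str, max_rounds: int = 3) -> str:
    """Iteratively repair a generated query using database error feedback.
    ask_model is a hypothetical chat function: list of message dicts -> str reply."""
    messages = [{"role": "user", "content": prompt}]
    sql = ask_model(messages)
    for _ in range(max_rounds):
        conn = sqlite3.connect(db_path)
        try:
            conn.execute(sql)
            return sql  # the query executes; stop refining
        except sqlite3.Error as err:
            messages += [
                {"role": "assistant", "content": sql},
                {"role": "user", "content": (
                    f"That query failed with: {err}. "
                    "Return a corrected SQL query only."
                )},
            ]
            sql = ask_model(messages)
        finally:
            conn.close()
    return sql  # best effort after max_rounds repair attempts
```

Note that this loop only guarantees executability, not correctness; checking against gold queries, as in the execution-accuracy sketch above, is possible only at evaluation time.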

Authors (4)
  1. Aiwei Liu (42 papers)
  2. Xuming Hu (120 papers)
  3. Lijie Wen (58 papers)
  4. Philip S. Yu (592 papers)
Citations (131)