OmniSQL: Synthesizing High-quality Text-to-SQL Data at Scale (2503.02240v1)
Abstract: Text-to-SQL, the task of translating natural language questions into SQL queries, plays a crucial role in enabling non-experts to interact with databases. While recent advancements in large language models (LLMs) have significantly enhanced text-to-SQL performance, existing approaches face notable limitations in real-world text-to-SQL applications. Prompting-based methods often depend on closed-source LLMs, which are expensive, raise privacy concerns, and lack customization. Fine-tuning-based methods, on the other hand, suffer from poor generalizability due to the limited coverage of publicly available training data. To overcome these challenges, we propose a novel and scalable text-to-SQL data synthesis framework for automatically synthesizing large-scale, high-quality, and diverse datasets without extensive human intervention. Using this framework, we introduce SynSQL-2.5M, the first million-scale text-to-SQL dataset, containing 2.5 million samples spanning over 16,000 synthetic databases. Each sample includes a database, SQL query, natural language question, and chain-of-thought (CoT) solution. Leveraging SynSQL-2.5M, we develop OmniSQL, a powerful open-source text-to-SQL model available in three sizes: 7B, 14B, and 32B. Extensive evaluations across nine datasets demonstrate that OmniSQL achieves state-of-the-art performance, matching or surpassing leading closed-source and open-source LLMs, including GPT-4o and DeepSeek-V3, despite its smaller size. We release all code, datasets, and models to support further research.
- Haoyang Li
- Shang Wu
- Xiaokang Zhang
- Xinmei Huang
- Jing Zhang
- Fuxin Jiang
- Shuai Wang
- Tieying Zhang
- Jianjun Chen
- Rui Shi
- Hong Chen
- Cuiping Li