From Words to Code: Harnessing Data for Program Synthesis from Natural Language (2305.01598v2)

Published 2 May 2023 in cs.DB, cs.AI, and cs.HC

Abstract: Creating programs to correctly manipulate data is a difficult task, as the underlying programming languages and APIs can be challenging to learn for many users who are not skilled programmers. LLMs demonstrate remarkable potential for generating code from natural language, but in the data manipulation domain, apart from the natural language (NL) description of the intended task, we also have the dataset on which the task is to be performed, or the "data context". Existing approaches have utilized data context in a limited way by simply adding relevant information from the input data into the prompts sent to the LLM. In this work, we utilize the available input data to execute the candidate programs generated by the LLMs and gather their outputs. We introduce semantic reranking, a technique to rerank the programs generated by LLMs based on three signals coming from the program outputs: (a) semantic filtering and well-formedness-based score tuning: do programs even generate well-formed outputs, (b) semantic interleaving: how do the outputs from different candidates compare to each other, and (c) output-based score tuning: how do the outputs compare to outputs predicted for the same task. We provide theoretical justification for semantic interleaving. We also introduce temperature mixing, where we combine samples generated by LLMs using both high and low temperatures. We extensively evaluate our approach in three domains, namely databases (SQL), data science (Pandas) and business intelligence (Excel's Power Query M) on a variety of new and existing benchmarks. We observe substantial gains across domains, with improvements of up to 45% in top-1 accuracy and 34% in top-3 accuracy.
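
To make the abstract's pipeline concrete, here is a minimal sketch of output-based semantic reranking, not the authors' implementation. It assumes candidate programs are Pandas snippets sampled from an LLM at both low and high temperatures (temperature mixing), that each candidate binds its result to a variable named `out` (a hypothetical convention), and that "semantic interleaving" is approximated by grouping candidates whose outputs are identical and ranking larger agreement clusters first; crashing or ill-formed candidates are filtered out.

```python
# Sketch of output-based semantic reranking for Pandas candidates.
# Assumptions (not from the paper's code): candidates bind their result to
# `out`, outputs are compared via an exact hash, and well-formedness means
# "produces a non-empty DataFrame".
from collections import defaultdict
import pandas as pd


def execute_candidate(code: str, df: pd.DataFrame):
    """Run one candidate program against the data context; return None on failure."""
    env = {"df": df.copy(), "pd": pd}
    try:
        exec(code, env)            # hypothetical convention: result bound to `out`
        return env.get("out")
    except Exception:
        return None                # semantic filtering: drop crashing programs


def rerank(candidates: list[str], df: pd.DataFrame) -> list[str]:
    """Reorder LLM-ranked candidates using signals from their executed outputs."""
    clusters = defaultdict(list)   # output signature -> candidates producing it
    for rank, code in enumerate(candidates):   # candidates in original LLM order
        out = execute_candidate(code, df)
        if not isinstance(out, pd.DataFrame) or out.empty:
            continue               # well-formedness check (assumed criterion)
        signature = pd.util.hash_pandas_object(out, index=False).sum()
        clusters[signature].append((rank, code))
    # Larger agreement clusters first; ties broken by the best original rank.
    ordered = sorted(clusters.values(), key=lambda c: (-len(c), c[0][0]))
    return [code for cluster in ordered for _, code in cluster]
```

In this sketch, temperature mixing amounts to passing `rerank` the union of low- and high-temperature samples; the paper's output-based score tuning (comparing outputs to predicted outputs for the task) is omitted for brevity.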

Authors (12)
  1. Anirudh Khatry (7 papers)
  2. Joyce Cahoon (7 papers)
  3. Jordan Henkel (9 papers)
  4. Shaleen Deep (19 papers)
  5. Venkatesh Emani (1 paper)
  6. Avrilia Floratou (10 papers)
  7. Sumit Gulwani (55 papers)
  8. Vu Le (26 papers)
  9. Mohammad Raza (9 papers)
  10. Sherry Shi (4 papers)
  11. Mukul Singh (13 papers)
  12. Ashish Tiwari (44 papers)
Citations (11)
