
Grounding Data Science Code Generation with Input-Output Specifications (2402.08073v2)

Published 12 Feb 2024 in cs.LG, cs.PL, and cs.SE

Abstract: Large language models (LLMs) have recently demonstrated a remarkable ability to generate code from natural language (NL) prompts. However, in the real world, NL is often too ambiguous to capture the true intent behind programming problems, requiring additional input-output (I/O) specifications. Unfortunately, LLMs can have difficulty aligning their outputs with both the NL prompt and the I/O specification. In this paper, we give a way to mitigate this issue in the context of data science programming, where tasks require explicit I/O specifications for clarity. Specifically, we propose GIFT4Code, a novel approach for the instruction fine-tuning of LLMs with respect to I/O specifications. Our method leverages synthetic data produced by the LLM itself and utilizes execution-derived feedback as a key learning signal. This feedback, in the form of program I/O specifications, is provided to the LLM to facilitate instruction fine-tuning. We evaluated our approach on two challenging data science benchmarks, Arcade and DS-1000. The results demonstrate a significant improvement in the LLM's ability to generate code that is not only executable but also accurately aligned with user specifications, substantially improving the quality of code generation for complex data science tasks.
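
The central idea, turning execution feedback into textual I/O specifications that ground instruction fine-tuning, can be illustrated with a short sketch. The snippet below is a minimal illustration under assumptions, not the paper's implementation: the helper `derive_io_spec`, the convention that a synthesized program stores its answer in a variable named `result`, and the example task are all hypothetical names introduced here. It executes a candidate program on sample inputs and summarizes the output (column names and dtypes for a DataFrame) into a spec string appended to the NL intent.

```python
# Minimal sketch: deriving an I/O specification by executing a synthesized
# program, then pairing it with the NL intent as an instruction-tuning example.
# All helper names and the `result` convention are assumptions for illustration.
import pandas as pd

def derive_io_spec(program: str, inputs: dict) -> str:
    """Run a candidate program and summarize its output as a textual I/O spec."""
    env = dict(inputs)
    exec(program, env)            # execute the synthesized program on sample inputs
    result = env.get("result")    # assume the program stores its answer in `result`
    if isinstance(result, pd.DataFrame):
        return (f"Output: DataFrame with columns {list(result.columns)} "
                f"and dtypes {result.dtypes.astype(str).to_dict()}")
    return f"Output: {type(result).__name__} = {result!r}"

# Hypothetical synthetic task: the program's execution yields the spec.
program = "result = df.groupby('city')['sales'].sum().reset_index()"
df = pd.DataFrame({"city": ["A", "A", "B"], "sales": [1, 2, 3]})
spec = derive_io_spec(program, {"df": df})

# The spec is appended to the NL intent to form a grounded instruction.
instruction = f"Compute total sales per city.\n{spec}"
print(instruction)
```

In this setup, the resulting (instruction, program) pairs supply the execution-derived learning signal the abstract describes: the model is fine-tuned to produce code consistent with both the NL prompt and the stated I/O specification.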

Authors (6)
  1. Yeming Wen (14 papers)
  2. Pengcheng Yin (42 papers)
  3. Kensen Shi (15 papers)
  4. Henryk Michalewski (42 papers)
  5. Swarat Chaudhuri (61 papers)
  6. Alex Polozov (5 papers)
Citations (8)