Training Language Models on Synthetic Edit Sequences Improves Code Synthesis (2410.02749v2)

Published 3 Oct 2024 in cs.LG and cs.CL

Abstract: Software engineers mainly write code by editing existing programs. In contrast, LLMs autoregressively synthesize programs in a single pass. One explanation for this is the scarcity of open-sourced edit data. While high-quality instruction data for code synthesis is already scarce, high-quality edit data is even scarcer. To fill this gap, we develop a synthetic data generation algorithm called LintSeq. This algorithm refactors existing code into a sequence of code edits by using a linter to procedurally sample across the error-free insertions that can be used to sequentially write programs. It outputs edit sequences as text strings consisting of consecutive program diffs. To test LintSeq, we use it to refactor a dataset of instruction + program pairs into instruction + program-diff-sequence tuples. Then, we instruction finetune a series of smaller LLMs ranging from 2.6B to 14B parameters on both the re-factored and original versions of this dataset, comparing zero-shot performance on code synthesis benchmarks. We show that during repeated sampling, edit sequence finetuned models produce more diverse programs than baselines. This results in better inference-time scaling for benchmark coverage as a function of samples, i.e. the fraction of problems "pass@k" solved by any attempt given "k" tries. For example, on HumanEval pass@50, small LLMs finetuned on synthetic edit sequences are competitive with GPT-4 and outperform models finetuned on the baseline dataset by +20% (+/-3%) in absolute score. Finally, we also pretrain our own tiny LMs for code understanding. We show that finetuning tiny models on synthetic code edits results in state-of-the-art code synthesis for the on-device model class. Our 150M parameter edit sequence LM matches or outperforms code models with twice as many parameters, both with and without repeated sampling, including Codex and AlphaCode.

Training LLMs on Synthetic Edit Sequences Improves Code Synthesis

This paper addresses a salient issue in LLM-based code synthesis. The authors propose to improve code synthesis by training models on synthetic edit sequences generated with an algorithm called LintSeq. Current LLMs synthesize code autoregressively in a single pass, which can be computationally expensive and yield insufficiently diverse samples. LintSeq recasts synthesis as a sequential editing problem, with the aim of improving the zero-shot performance and diversity of generated code.

Methodology

LintSeq operates in two phases: a backward sampling phase and a forward edit computation phase. During backward sampling, a source file is deconstructed into a sequence of progressively smaller program states by procedurally deleting lines, with a linter (a simple static program verifier) used to keep every intermediate state free of errors. The forward phase then reverses this chain and computes the differences between consecutive program states with the Unix diff operator, yielding a sequence of insertion edits. The mechanism is parameter-free and offers an efficient way to produce edit-sequence training data.
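
To make the procedure concrete, the following is a minimal, simplified sketch in Python. It samples one deleted line at a time and delegates the linter check to a caller-supplied predicate (the hypothetical passes_linter below, e.g. a wrapper around pyflakes); the paper's actual sampler is linter-guided over chunks of lines, so treat this as an illustration of the idea rather than the authors' implementation.

    import difflib
    import random

    def lintseq_sketch(program: str, passes_linter, max_states: int = 16):
        """Backward-sample error-free program states by deleting lines,
        then emit the forward sequence of textual diffs."""
        states = [program.splitlines()]          # states[0] is the full program

        # Backward sampling: repeatedly delete one random line, keeping only
        # deletions that leave the program free of linter errors, until the
        # program is empty or no clean single-line deletion remains.
        current = states[0]
        while current and len(states) < max_states:
            order = list(range(len(current)))
            random.shuffle(order)
            for idx in order:
                trimmed = current[:idx] + current[idx + 1:]
                if passes_linter("\n".join(trimmed)):
                    current = trimmed
                    states.append(current)
                    break
            else:
                break                            # no deletion stays error-free

        # Forward phase: reverse the chain (empty file -> full program) and
        # compute a zero-context diff between consecutive states, mirroring
        # the Unix `diff -U0` style described in the paper.
        if states[-1]:
            states.append([])                    # make sure the chain ends empty
        states = states[::-1]
        edits = []
        for before, after in zip(states, states[1:]):
            hunk = difflib.unified_diff(before, after, lineterm="", n=0)
            edits.append("\n".join(hunk))
        return edits

Each returned string is one program diff consisting only of insertions; concatenated in order, they form the edit-sequence text that replaces the full program as the fine-tuning target.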

The authors hypothesize that training models on these sequences could provide a better trade-off between generation quality and computational cost than training models to synthesize full programs in a single pass. The synthetic data produced by LintSeq teaches models to predict program modifications incrementally, potentially leading to more accurate and more varied code synthesis.
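
At inference time, a model's predicted diff sequence still has to be resolved back into a program by applying the edits in order. A minimal resolver for insertion-only, zero-context hunks in the diff -U0 style assumed above might look like the following sketch (again an illustration, not the paper's tooling):

    import re

    HUNK_RE = re.compile(r"@@ -(\d+)(?:,\d+)? \+\d+(?:,\d+)? @@")

    def apply_insertion_diff(source: str, diff_text: str) -> str:
        """Apply one insertion-only, zero-context unified diff to `source`."""
        lines = source.splitlines()
        offset = 0       # lines inserted so far shift later hunk positions
        cursor = None
        for row in diff_text.splitlines():
            match = HUNK_RE.match(row)
            if match:
                # For a pure insertion, `-a,0` means: insert after old line a.
                cursor = int(match.group(1)) + offset
            elif row.startswith("+") and not row.startswith("+++") and cursor is not None:
                lines.insert(cursor, row[1:])
                cursor += 1
                offset += 1
        return "\n".join(lines)

Folding this function over a predicted edit sequence, starting from an empty string, reconstructs the final program.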

Experimental Results

The authors conducted multiple experiments across models with varying scales, from 150M to 14B parameters. Key findings include:

  1. Across all model scales, models fine-tuned on synthetic edit sequences produced higher-quality and more diverse programs than those fine-tuned on full programs.
  2. LLMs trained on edit sequences showed better "pass@k" performance under repeated sampling, i.e. the fraction of problems solved by at least one of k attempts (see the estimator sketched after this list).
  3. Notably, smaller models fine-tuned on LintSeq data exhibited state-of-the-art performance for their size: a 150M parameter model matched or exceeded the performance of code models with twice as many parameters, including Codex and AlphaCode.
  4. When comparing inference costs, LintSeq-enhanced models offered coverage competitive with larger, state-of-the-art models like GPT-4, but with reduced cumulative inference-time FLOPs.
  5. Ablation studies revealed that removing the linter from the backward sampling phase negatively impacted the quality and diversity of model outputs, underscoring the importance of error-free edit sequences.
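
For context, the benchmark coverage in item 2 is the standard pass@k statistic. It is usually computed with the unbiased estimator popularized by the HumanEval evaluation; a minimal Python version (not taken from this paper's code) is:

    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k: with n samples per problem of which c are correct,
        the probability that at least one of k randomly drawn samples passes."""
        if n - c < k:
            return 1.0   # fewer than k incorrect samples: some draw must pass
        return 1.0 - comb(n - c, k) / comb(n, k)

Averaging pass_at_k over all benchmark problems gives the coverage curves reported as a function of k.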

Implications and Future Work

The implications of this research are both practical and theoretical. Practically, the introduction of LintSeq provides an efficient methodology for boosting the performance of LLMs on code synthesis tasks, enabling smaller models to achieve competitive results with significantly reduced computational costs. Theoretically, the paper suggests that re-parameterizing LLM tasks using edit sequences can be a potent mechanism for enhancing model expressivity and output diversity.

Looking forward, the application of LintSeq might be extended beyond code synthesis to tasks like mathematical reasoning and formal theorem proving, where similar sequential problem structures exist. Additionally, future research could explore more sophisticated inference-time search strategies in the "edit space," potentially further enhancing the performance of LLMs.

In conclusion, this paper marks a significant step toward improving LLM code synthesis through data-level re-parameterization, offering a promising avenue for future advances in AI and machine learning.

Authors (3)
  1. Ulyana Piterbarg (5 papers)
  2. Lerrel Pinto (81 papers)
  3. Rob Fergus (67 papers)
Citations (1)