Training LLMs on Synthetic Edit Sequences Improves Code Synthesis
This paper addresses a salient issue in LLM-based code synthesis. The authors propose LintSeq, an algorithm for generating synthetic code edit sequences, and show that training models on these sequences improves code synthesis. Current LLMs synthesize programs autoregressively in a single pass, an approach that can be both computationally expensive and insufficiently diverse when sampled repeatedly. LintSeq recasts program synthesis as a sequential edit-generation problem, with the goal of improving both the zero-shot performance and the diversity of generated code.
Methodology
LintSeq operates in two phases: a backward sampling phase and a forward edit computation phase. During backward sampling, lines are progressively deleted from a source code file, and a linter (a static program verifier) is used to ensure that each intermediate program state remains error-free. The forward phase then computes the differences between consecutive program states with the Unix diff operator, yielding a sequence of code edits. This parameter-free procedure offers an efficient way to produce edit sequences for training.
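To make the two phases concrete, below is a minimal, illustrative sketch of a LintSeq-style procedure in Python. It assumes a line-deletion-based backward sampler, uses pyflakes as a stand-in linter, and uses difflib to compute unified diffs; the paper's actual sampling distribution, linter choice, and diff formatting may differ.

```python
import difflib
import os
import random
import subprocess
import tempfile


def passes_linter(code: str) -> bool:
    """Return True if the linter finds no errors in `code`.

    pyflakes is used here only for illustration; any linter with a
    pass/fail exit status could be substituted.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            ["python", "-m", "pyflakes", path], capture_output=True
        )
        return result.returncode == 0
    finally:
        os.remove(path)


def backward_sample(source: str, rng: random.Random,
                    max_tries: int = 20) -> list[str]:
    """Backward phase: repeatedly delete a random block of lines,
    keeping only intermediate states that still pass the linter."""
    states = [source]
    lines = source.splitlines()
    while lines:
        for _ in range(max_tries):
            i = rng.randrange(len(lines))
            j = rng.randrange(i, len(lines)) + 1
            candidate_lines = lines[:i] + lines[j:]
            candidate = "\n".join(candidate_lines)
            if not candidate_lines or passes_linter(candidate):
                lines = candidate_lines
                states.append(candidate)
                break
        else:
            break  # no error-free deletion found; stop early
    return list(reversed(states))  # order from (near-)empty to full program


def forward_edits(states: list[str]) -> list[str]:
    """Forward phase: diff consecutive states to obtain the edit sequence."""
    edits = []
    for prev, curr in zip(states, states[1:]):
        diff = difflib.unified_diff(
            prev.splitlines(), curr.splitlines(), lineterm=""
        )
        edits.append("\n".join(diff))
    return edits
```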
The authors hypothesize that training on these edit sequences offers a better trade-off between generation quality and computational cost than training models to synthesize full programs in a single pass. The synthetic data produced by LintSeq teaches models to predict program modifications incrementally, which can lead to more accurate and more varied code synthesis.
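As a rough illustration of how LintSeq output could be turned into supervised fine-tuning data, the snippet below concatenates an instruction with its edit sequence using a placeholder separator token; the separator and the exact prompt formatting are assumptions, not the paper's specification.

```python
def build_training_example(instruction: str, edits: list[str],
                           sep: str = "<|edit|>") -> str:
    """Serialize an instruction plus its edit sequence into one training
    string. `sep` is a hypothetical separator token, not the paper's."""
    return instruction + "".join(sep + "\n" + e + "\n" for e in edits)


# Example usage with the sketch above (hypothetical source and instruction):
# states = backward_sample(source_code, random.Random(0))
# example = build_training_example("Write a function that ...",
#                                  forward_edits(states))
```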
Experimental Results
The authors conducted experiments across models ranging from 150M to 14B parameters. Key findings include:
- Across all model scales, models fine-tuned on synthetic edit sequences produced higher-quality and more diverse code than models fine-tuned on the corresponding full programs.
- LLMs trained on edit sequences showed better pass@k performance when sampled repeatedly; pass@k measures the fraction of problems for which at least one of k sampled solutions passes all tests (see the estimator sketch after this list).
- Notably, smaller models fine-tuned on LintSeq data achieved state-of-the-art performance for their size class. For instance, a 150M-parameter model matched or exceeded the performance of code models with up to twice as many parameters, including variants of Codex and AlphaCode.
- In terms of inference cost, LintSeq-tuned models achieved benchmark coverage competitive with larger state-of-the-art models such as GPT-4 while using fewer cumulative inference-time FLOPs.
- Ablation studies revealed that removing the linter from the backward sampling phase negatively impacted the quality and diversity of model outputs, underscoring the importance of error-free edit sequences.
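For reference, pass@k in this line of work is typically computed with the unbiased estimator from the Codex evaluation protocol (Chen et al., 2021); a minimal implementation is sketched below, assuming n samples per problem of which c pass the tests.

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples is correct, given n total samples with c correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)


# Example: with 50 samples per problem and 5 correct solutions,
# pass_at_k(50, 5, 10) is roughly 0.69.
```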
Implications and Future Work
The implications of this research are both practical and theoretical. Practically, the introduction of LintSeq provides an efficient methodology for boosting the performance of LLMs on code synthesis tasks, enabling smaller models to achieve competitive results with significantly reduced computational costs. Theoretically, the paper suggests that re-parameterizing LLM tasks using edit sequences can be a potent mechanism for enhancing model expressivity and output diversity.
Looking forward, the application of LintSeq might be extended beyond code synthesis to tasks like mathematical reasoning and formal theorem proving, where similar sequential problem structures exist. Additionally, future research could explore more sophisticated inference-time search strategies in the "edit space," potentially further enhancing the performance of LLMs.
In conclusion, this paper represents a significant step toward refining LLM capabilities in code synthesis through data-level re-parameterization, offering a promising avenue for future advances in AI and machine learning.