Teaching Large Language Models to Self-Debug (2304.05128v2)

Published 11 Apr 2023 in cs.CL and cs.AI

Abstract: LLMs have achieved impressive performance on code generation. However, for complex programming tasks, generating the correct solution in one go becomes challenging, thus some prior works have designed program repair approaches to improve code generation performance. In this work, we propose Self-Debugging, which teaches a LLM to debug its predicted program via few-shot demonstrations. In particular, we demonstrate that Self-Debugging can teach the LLM to perform rubber duck debugging; i.e., without any human feedback on the code correctness or error messages, the model is able to identify its mistakes by investigating the execution results and explaining the generated code in natural language. Self-Debugging achieves the state-of-the-art performance on several code generation benchmarks, including the Spider dataset for text-to-SQL generation, TransCoder for C++-to-Python translation, and MBPP for text-to-Python generation. On the Spider benchmark where there are no unit tests to verify the correctness of predictions, Self-Debugging with code explanation consistently improves the baseline by 2-3%, and improves the prediction accuracy on problems of the hardest level by 9%. On TransCoder and MBPP where unit tests are available, Self-Debugging improves the baseline accuracy by up to 12%. Meanwhile, by leveraging feedback messages and reusing failed predictions, Self-Debugging notably improves sample efficiency, and can match or outperform baseline models that generate more than 10x candidate programs.

Overview of Self-Debugging for LLMs

The paper "Teaching LLMs to Self-Debug" introduces a novel approach, ELF-DEBUGGING, aimed at enhancing the debugging capabilities of LLMs. Tackling the challenge of generating correct code consistently, the authors propose a method that empowers LLMs to self-debug, inspired by the human technique of rubber duck debugging. This involves the LLM explaining its code line-by-line without explicit human feedback, using execution results to identify and correct errors.

Key Results and Contributions

Self-Debugging was evaluated across several code generation benchmarks, achieving state-of-the-art accuracy improvements:

  • Spider Benchmark: On text-to-SQL tasks, where no unit tests are available to verify predictions, Self-Debugging with code explanation improved baseline accuracy by 2-3% and lifted accuracy on the hardest queries by 9%.
  • TransCoder and MBPP: On C++-to-Python translation and text-to-Python generation, where unit tests are available, accuracy improved by up to 12%. Notably, Self-Debugging without code explanations still consistently improved performance by 2-3%.

Self-Debugging also improved sample efficiency: by reusing failed predictions and their feedback, it matched or surpassed baselines that generate more than 10x as many candidate programs, pointing to meaningful savings in compute.

Methodological Insights

Self-Debugging capitalizes on the intrinsic code generation abilities of pre-trained LLMs:

  • Few-Shot Prompting: A handful of in-context demonstrations adapts the model to the debugging task, with no additional training or fine-tuning.
  • Rubber Duck Debugging Analog: By explaining its generated code step by step, the model can internally critique and refine its own output (see the prompt sketch after this list).
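
As a concrete illustration of the few-shot setup, the sketch below assembles a debugging prompt from a couple of hand-written demonstrations followed by the new failing program. The demonstration content and field layout are hypothetical placeholders, not the paper's actual exemplars.

```python
# Illustrative few-shot prompt assembly for the debugging step.
# The demonstrations are placeholders, not the paper's actual exemplars.

DEBUG_DEMOS = [
    {
        "task": "Return the sum of a list of integers.",
        "code": "def list_sum(xs):\n    return sum(xs[1:])",
        "feedback": "AssertionError: list_sum([1, 2, 3]) == 6 failed (got 5)",
        "explanation": "The slice xs[1:] skips the first element, so the total is short by xs[0].",
        "fix": "def list_sum(xs):\n    return sum(xs)",
    },
    # ... further demonstrations in the same format ...
]

def build_debug_prompt(task: str, code: str, feedback: str) -> str:
    """Concatenate few-shot demonstrations, then the new failing program."""
    parts = []
    for d in DEBUG_DEMOS:
        parts.append(
            f"Task: {d['task']}\nCode:\n{d['code']}\n"
            f"Feedback: {d['feedback']}\n"
            f"Explanation: {d['explanation']}\n"
            f"Fixed code:\n{d['fix']}\n"
        )
    # The new problem ends with "Explanation:" so the model continues from there.
    parts.append(
        f"Task: {task}\nCode:\n{code}\nFeedback: {feedback}\nExplanation:"
    )
    return "\n".join(parts)
```

The key point is that no weights are updated: the in-context demonstrations alone show the model what a useful explanation and fix look like.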

Practical and Theoretical Implications

Practically, integrating Self-Debugging could significantly enhance the robustness of code generated by LLMs in software development, reducing reliance on human oversight and potentially lowering debugging costs in production pipelines. Theoretically, this approach paves a promising path for autonomous AI systems capable of self-improvement and error detection.

Speculation on Future Developments

Looking ahead, this research motivates improving LLMs' semantic understanding of code, potentially by incorporating more sophisticated diagnostic tools and richer feedback mechanisms. Further work could explore hybrid approaches that combine predictive models with real-time execution feedback to strengthen debugging.

In conclusion, Self-Debugging represents a meaningful stride towards autonomous, self-correcting AI systems, expanding the horizons of AI applications in software engineering. By demonstrating tangible improvements in accuracy and sample efficiency, this work suggests a promising trajectory for enhancing LLM utility in complex code generation tasks.

Authors (4)
  1. Xinyun Chen (80 papers)
  2. Maxwell Lin (9 papers)
  3. Nathanael Schärli (8 papers)
  4. Denny Zhou (65 papers)
Citations (509)