Enhancing Automated Program Repair through Fine-tuning and Prompt Engineering (2304.07840v2)

Published 16 Apr 2023 in cs.LG and cs.SE

Abstract: Sequence-to-sequence models have been used to transform erroneous programs into correct ones when trained on a sufficiently large dataset. Recent studies have also provided strong empirical evidence that code review can further improve program repair. LLMs, trained on both Natural Language (NL) and Programming Language (PL), can contain inherent knowledge of both. In this study, we investigate whether this inherent knowledge of PL and NL can be utilized to improve automated program repair. We applied PLBART and CodeT5, two state-of-the-art LLMs pre-trained on both PL and NL, to two natural language-based program repair datasets and found that pre-trained LLMs fine-tuned on datasets containing both code reviews and the subsequent code changes notably outperformed each of the previous models. With the advent of code-generative models like Codex and GPT-3.5-Turbo, we also performed zero-shot and few-shot learning-based prompt engineering to assess their performance on these datasets. However, based on our manual analysis of the repaired code generated by these models, the practical application of LLMs for automated program repair is still a long way off.
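
As a concrete illustration of the fine-tuning setup the abstract describes, here is a minimal sketch in which a pre-trained CodeT5 checkpoint is trained on a single (review comment + buggy code) to fixed-code pair via Hugging Face transformers. The checkpoint name, input template, separator token, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

import torch
from transformers import RobertaTokenizer, T5ForConditionalGeneration

# Illustrative checkpoint; the paper fine-tunes both PLBART and CodeT5.
tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# One hypothetical training pair: the review comment is concatenated with
# the buggy code so the model conditions on both NL and PL.
review = "This should add the operands, not subtract them."
buggy = "def add(a, b):\n    return a - b"
fixed = "def add(a, b):\n    return a + b"

inputs = tokenizer(review + " </s> " + buggy, return_tensors="pt",
                   truncation=True, max_length=512)
labels = tokenizer(fixed, return_tensors="pt",
                   truncation=True, max_length=512).input_ids

# A single optimization step; real fine-tuning iterates over the full
# dataset of code reviews and their subsequent code changes.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

The design point this sketch captures is the pairing of NL (the review) with PL (the buggy program) in one input sequence, which is what lets the fine-tuned model exploit its pre-training on both modalities.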
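
The few-shot prompt engineering with GPT-3.5-Turbo could look like the sketch below, assuming the current OpenAI Python client. The prompt template and the worked examples are hypothetical, not the ones used in the study; dropping the examples from the prompt yields the zero-shot variant.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical few-shot prompt: a worked (review, buggy, fixed) example
# precedes the program to repair.
prompt = """Fix the buggy program according to the review comment.

Review: Off-by-one error: the loop skips the last element.
Buggy: for i in range(len(xs) - 1): total += xs[i]
Fixed: for i in range(len(xs)): total += xs[i]

Review: This should add the operands, not subtract them.
Buggy: def add(a, b): return a - b
Fixed:"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(response.choices[0].message.content)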

Authors (6)
  1. Rishov Paul (1 paper)
  2. Md. Mohib Hossain (1 paper)
  3. Mohammed Latif Siddiq (7 papers)
  4. Masum Hasan (14 papers)
  5. Anindya Iqbal (24 papers)
  6. Joanna C. S. Santos (13 papers)
Citations (9)