Repair Is Nearly Generation: Multilingual Program Repair with LLMs (2208.11640v3)

Published 24 Aug 2022 in cs.SE, cs.AI, and cs.PL

Abstract: Most programmers make mistakes when writing code. Some of these mistakes are small and require few edits to the original program -- a class of errors recently termed last mile mistakes. These errors break the flow for experienced developers and can stump novice programmers. Existing automated repair techniques targeting this class of errors are language-specific and do not easily carry over to new languages: transferring symbolic approaches requires substantial engineering, and neural approaches require data and retraining. We introduce RING, a multilingual repair engine powered by a large language model trained on code (LLMC), such as Codex. Such a multilingual engine enables a flipped model for programming assistance, one where the programmer writes code and the AI assistance suggests fixes, compared to traditional code-suggestion technology. Taking inspiration from the way programmers manually fix bugs, we show that a prompt-based strategy that conceptualizes repair as localization, transformation, and candidate ranking can successfully repair programs in multiple languages with minimal effort. We present the first results for such a multilingual repair engine by evaluating on 6 different languages and comparing performance to language-specific repair engines. We show that RING can outperform language-specific repair engines for three of these languages.
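The abstract frames repair as a three-stage pipeline: localize the fault, transform the buggy code into candidate fixes via prompting, and rank the candidates. As a rough illustration only, the sketch below shows one way such a pipeline could be wired together in Python. The `Candidate` type, the `complete` function, the few-shot prompt, and the log-probability scoring are all hypothetical stand-ins, not RING's actual implementation.

```python
# Sketch of a RING-style repair pipeline: localization -> transformation
# -> candidate ranking. All names below (Candidate, complete, FEW_SHOT)
# are hypothetical illustrations, not the paper's actual code.
from dataclasses import dataclass


@dataclass
class Candidate:
    fixed_line: str
    score: float  # e.g., mean token log-probability reported by the LLM


# A tiny few-shot prompt that frames repair as completion, in the
# spirit of the paper's prompt-based strategy (example pair is ours).
FEW_SHOT = (
    "### Buggy\n"
    'print("total:" total)\n'
    "### Fixed\n"
    'print("total:", total)\n\n'
)


def complete(prompt: str) -> Candidate:
    """Hypothetical stand-in for a code-LLM completion call (e.g., Codex).
    A real implementation would return the sampled fix plus a confidence
    score such as the mean token log-probability."""
    raise NotImplementedError("plug in a code-LLM API here")


def localize(source: str, diagnostic_line: int) -> str:
    """Localization: trust the compiler/interpreter diagnostic and pull
    out the reported line (one simple localization strategy)."""
    return source.splitlines()[diagnostic_line - 1]


def transform(buggy_line: str, n: int = 5) -> list[Candidate]:
    """Transformation: sample n candidate fixes from the LLM."""
    prompt = f"{FEW_SHOT}### Buggy\n{buggy_line}\n### Fixed\n"
    return [complete(prompt) for _ in range(n)]


def rank(candidates: list[Candidate]) -> list[Candidate]:
    """Ranking: order candidates by the model's own confidence."""
    return sorted(candidates, key=lambda c: c.score, reverse=True)


def repair(source: str, diagnostic_line: int) -> str:
    """End-to-end: locate the fault, generate fixes, splice in the best."""
    lines = source.splitlines()
    best = rank(transform(localize(source, diagnostic_line)))[0]
    lines[diagnostic_line - 1] = best.fixed_line
    return "\n".join(lines)
```

In a real system the ranking stage would typically use the model's own token probabilities or re-check each candidate against the compiler; the abstract does not pin down either detail, so the stub above leaves the scoring source open.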

Authors (6)
  1. Harshit Joshi (7 papers)
  2. José Cambronero (22 papers)
  3. Sumit Gulwani (55 papers)
  4. Vu Le (26 papers)
  5. Ivan Radicek (6 papers)
  6. Gust Verbruggen (15 papers)
Citations (108)
