Code Repair with LLMs gives an Exploration-Exploitation Tradeoff (2405.17503v3)

Published 26 May 2024 in cs.SE, cs.AI, cs.CL, and cs.PL

Abstract: Iteratively improving and repairing source code with LLMs, known as refinement, has emerged as a popular way of generating programs that would be too complex to construct in one shot. Given a bank of test cases, together with a candidate program, an LLM can improve that program by being prompted with failed test cases. But it remains an open question how to best iteratively refine code, with prior work employing simple greedy or breadth-first strategies. We show here that refinement exposes an explore-exploit tradeoff: exploit by refining the program that passes the most test cases, or explore by refining a less-considered program. We frame this as an arm-acquiring bandit problem, which we solve with Thompson Sampling. The resulting LLM-based program synthesis algorithm is broadly applicable: across loop invariant synthesis, visual reasoning puzzles, and competition programming problems, we find that our new method can solve more problems using fewer LLM calls.
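The selection loop the abstract describes can be sketched as a Thompson Sampling bandit in which each candidate program is an arm scored by its test results, and each refinement adds a new arm. This is a minimal illustrative sketch, not the paper's implementation: the class name, the Beta(1+passed, 1+failed) posterior, and the bookkeeping are all assumptions made for exposition.

```python
import random


class RefinementBandit:
    """Hypothetical sketch of Thompson Sampling over candidate programs.

    Each program is a bandit arm; its Beta posterior is fit from how many
    test cases it passed vs. failed. Because refining a program produces a
    new candidate that becomes a new arm, this is an "arm-acquiring" bandit.
    """

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)
        # Each arm is a (program_text, tests_passed, tests_failed) triple.
        self.arms: list[tuple[str, int, int]] = []

    def add_program(self, program: str, passed: int, failed: int) -> None:
        """Register a (possibly refined) candidate program as a new arm."""
        self.arms.append((program, passed, failed))

    def select(self) -> int:
        """Draw theta ~ Beta(1 + passed, 1 + failed) for every arm and
        return the index of the arm with the highest sampled value.

        High-passing programs usually win (exploitation), but arms with few
        observations have wide posteriors and occasionally win (exploration).
        """
        best_i, best_theta = 0, -1.0
        for i, (_, p, f) in enumerate(self.arms):
            theta = self.rng.betavariate(1 + p, 1 + f)
            if theta > best_theta:
                best_i, best_theta = i, theta
        return best_i
```

In a full refinement loop, the selected program would be sent back to the LLM with its failing test cases, and the refined output would be added via `add_program`, growing the set of arms over time.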

Authors (7)
  1. Hao Tang (378 papers)
  2. Keya Hu (2 papers)
  3. Jin Peng Zhou (28 papers)
  4. Sicheng Zhong (5 papers)
  5. Wei-Long Zheng (14 papers)
  6. Xujie Si (36 papers)
  7. Kevin Ellis (31 papers)
Citations (7)