Learning Performance-Improving Code Edits (2302.07867v5)

Published 15 Feb 2023 in cs.SE, cs.AI, cs.LG, and cs.PF

Abstract: With the decline of Moore's law, optimizing program performance has become a major focus of software research. However, high-level optimizations such as API and algorithm changes remain elusive due to the difficulty of understanding the semantics of code. Simultaneously, pretrained LLMs have demonstrated strong capabilities at solving a wide range of programming tasks. To that end, we introduce a framework for adapting LLMs to high-level program optimization. First, we curate a dataset of performance-improving edits made by human programmers, drawn from over 77,000 pairs of competitive C++ programming submissions and accompanied by extensive unit tests. A major challenge is the significant variability of measuring performance on commodity hardware, which can lead to spurious "improvements." To isolate and reliably evaluate the impact of program optimizations, we design an environment based on the gem5 full-system simulator, the de facto simulator used in academia and industry. Next, we propose a broad range of adaptation strategies for code optimization: for prompting, these include retrieval-based few-shot prompting and chain-of-thought; for finetuning, these include performance-conditioned generation and synthetic data augmentation based on self-play. A combination of these techniques achieves a mean speedup of 6.86× with eight generations, higher than the average optimization from individual programmers (3.66×). Using our model's fastest generations, we set a new upper limit on the fastest speedup possible for our dataset at 9.64×, compared with 9.56× when using the fastest human submissions available.
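The abstract names the adaptation strategies without showing their shape. Below is a minimal sketch, not the paper's released code, of two of the pieces it describes: a performance-conditioned prompt, where a tag asks the model for a given speedup tier, and a unit-test gate that accepts a candidate edit only if it stays functionally correct. The tag format ("performance tier 10/10"), the helper names, and the g++ compile-and-run check are all illustrative assumptions; the paper measures timing deterministically inside the gem5 simulator precisely because wall-clock timing on commodity hardware is too noisy.

```python
import os
import subprocess
import tempfile
import textwrap

def build_prompt(slow_src: str, tier: str = "10/10") -> str:
    """Condition generation on a desired performance tier (hypothetical tag format)."""
    return (f"// slower version:\n{slow_src}\n"
            f"// optimized version (performance tier {tier}):\n")

def passes_tests(src: str, tests: list[tuple[str, str]]) -> bool:
    """Compile a candidate C++ program and check it on (stdin, expected stdout) pairs."""
    with tempfile.TemporaryDirectory() as d:
        cpp, exe = os.path.join(d, "a.cpp"), os.path.join(d, "a.out")
        with open(cpp, "w") as f:
            f.write(src)
        if subprocess.run(["g++", "-O2", cpp, "-o", exe]).returncode != 0:
            return False  # candidate does not compile
        for stdin, want in tests:
            out = subprocess.run([exe], input=stdin, capture_output=True,
                                 text=True, timeout=10).stdout
            if out.strip() != want.strip():
                return False  # functional regression: reject the edit
    return True

# Example: a candidate edit replacing an O(n) summation loop with a closed form.
fast = textwrap.dedent("""\
    #include <iostream>
    int main() { long long n; std::cin >> n;
                 std::cout << n * (n + 1) / 2 << "\\n"; }  // closed form
""")
print(build_prompt(fast))
print(passes_tests(fast, [("10", "55"), ("100", "5050")]))  # True
```

In the paper's pipeline, candidates that pass the unit tests would then be timed under gem5 to compute the speedup over the slow version; only the simulated timing, not wall clock, decides whether an edit counts as an improvement.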

Authors (10)
  1. Alexander Shypula
  2. Aman Madaan
  3. Yimeng Zeng
  4. Uri Alon
  5. Jacob Gardner
  6. Milad Hashemi
  7. Graham Neubig
  8. Parthasarathy Ranganathan
  9. Osbert Bastani
  10. Amir Yazdanbakhsh
Citations (65)