Fault-Aware Neural Code Rankers (2206.03865v2)

Published 4 Jun 2022 in cs.PL, cs.AI, and cs.SE

Abstract: LLMs have demonstrated an impressive ability to generate code for various programming tasks. In many instances, LLMs can generate a correct program for a task when given numerous trials. Consequently, a recent trend is to perform large-scale sampling of programs using a model and then filter/rank the programs based on their execution on a small number of known unit tests in order to select one candidate solution. However, these approaches assume that the unit tests are given and that the generated programs (which can perform arbitrary dangerous operations such as file manipulation) can be executed safely. Both of these assumptions are impractical in real-world software development. In this paper, we propose CodeRanker, a neural ranker that can predict the correctness of a sampled program without executing it. CodeRanker is fault-aware, i.e., it is trained to predict different kinds of execution information, such as the exact compile/runtime error type (e.g., an IndexError or a TypeError). We show that CodeRanker can significantly increase the pass@1 accuracy of various code generation models (including Codex, GPT-Neo, GPT-J) on the APPS, HumanEval and MBPP datasets.
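
The selection procedure the abstract describes can be illustrated with a short sketch: sample many candidate programs, score each one with a learned classifier instead of running it, and return the top-ranked candidate as the pass@1 answer. The sketch below is illustrative only; `score_correctness` is a hypothetical stand-in for the trained fault-aware ranker, and the fault label list mirrors the error types mentioned in the abstract rather than the paper's exact label schema.

```python
from typing import Callable, List, Tuple

# Illustrative label space: "correct" plus a few fault categories,
# echoing the compile/runtime error types named in the abstract.
FAULT_LABELS = ["correct", "CompileError", "IndexError", "TypeError", "WrongOutput"]


def rank_candidates(
    task: str,
    candidates: List[str],
    score_correctness: Callable[[str, str], float],
) -> List[Tuple[float, str]]:
    """Order sampled programs by predicted probability of being correct.

    No candidate is ever executed; the ranker alone supplies the score.
    """
    scored = [(score_correctness(task, program), program) for program in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored


def select_pass_at_1(
    task: str,
    candidates: List[str],
    score_correctness: Callable[[str, str], float],
) -> str:
    """pass@1 selection: return the single top-ranked candidate program."""
    return rank_candidates(task, candidates, score_correctness)[0][1]
```

In the paper's setting, the scoring function would be a neural classifier trained on (task, program, execution outcome) triples, so that ranking at inference time requires neither unit tests nor a sandbox.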

Authors (8)
  1. Jeevana Priya Inala (18 papers)
  2. Chenglong Wang (80 papers)
  3. Mei Yang (20 papers)
  4. Andres Codas (5 papers)
  5. Mark Encarnación (1 paper)
  6. Madanlal Musuvathi (8 papers)
  7. Jianfeng Gao (344 papers)
  8. Shuvendu K Lahiri (2 papers)
Citations (39)
