
Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation (2305.01210v3)

Published 2 May 2023 in cs.SE, cs.CL, and cs.LG

Abstract: Program synthesis has been long studied with recent approaches focused on directly using the power of LLMs to generate code. Programming benchmarks, with curated synthesis problems and test-cases, are used to measure the performance of various LLMs on code synthesis. However, these test-cases can be limited in both quantity and quality for fully assessing the functional correctness of the generated code. Such limitation in the existing benchmarks begs the following question: In the era of LLMs, is the code generated really correct? To answer this, we propose EvalPlus -- a code synthesis evaluation framework to rigorously benchmark the functional correctness of LLM-synthesized code. EvalPlus augments a given evaluation dataset with large amounts of test-cases newly produced by an automatic test input generator, powered by both LLM- and mutation-based strategies. While EvalPlus is general, we extend the test-cases of the popular HumanEval benchmark by 80x to build HumanEval+. Our extensive evaluation across 26 popular LLMs (e.g., GPT-4 and ChatGPT) demonstrates that HumanEval+ is able to catch significant amounts of previously undetected wrong code synthesized by LLMs, reducing the pass@k by up-to 19.3-28.9%. We also surprisingly found that test insufficiency can lead to mis-ranking. For example, both WizardCoder-CodeLlama and Phind-CodeLlama now outperform ChatGPT on HumanEval+, while none of them could on HumanEval. Our work not only indicates that prior popular code synthesis evaluation results do not accurately reflect the true performance of LLMs for code synthesis, but also opens up a new direction to improve such programming benchmarks through automated testing. We have open-sourced our tools, enhanced datasets as well as all LLM-generated code at https://github.com/evalplus/evalplus to facilitate and accelerate future LLM-for-code research.

Rigorous Evaluation of LLMs for Code Generation: An Analysis of EvalPlus

Evaluating LLMs for code synthesis has become increasingly important as these models demonstrate substantial ability to generate code from natural language descriptions. The paper "Is Your Code Generated by ChatGPT Really Correct?" introduces EvalPlus, a framework designed to rigorously benchmark the functional correctness of LLM-synthesized code. This summary reviews the methodology, findings, and implications of that work.

Methodological Enhancements

EvalPlus addresses key limitations in existing code synthesis benchmarks, such as HumanEval, by expanding their test-cases using a two-pronged strategy:

  1. Automatic Test Input Generation: EvalPlus combines LLM- and mutation-based strategies to generate diverse, comprehensive test inputs. It first prompts ChatGPT for a set of high-quality seed inputs, then diversifies them extensively through type-aware mutation, probing edge cases and difficult scenarios that the original benchmark tests miss (see the first sketch after this list).
  2. Test-Suite Reduction: Because executing very large test suites is computationally expensive, EvalPlus also offers test-suite reduction that preserves testing effectiveness while minimizing the number of test cases. Reduction is guided by requirements such as code coverage, mutant killing, and empirically observed killing of wrong LLM samples, so the retained tests remain both efficient and effective (see the second sketch after this list).
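To make the first step concrete, here is a minimal Python sketch of type-aware input mutation, assuming each seed input is a tuple of Python argument values produced by the LLM; the mutation rules and the generate_inputs helper are simplified illustrations, not EvalPlus's actual operators.

```python
import copy
import random

def mutate(value):
    """Return a structurally similar variant of `value`, chosen by its runtime type.

    Simplified, type-preserving perturbations for illustration only.
    """
    if isinstance(value, bool):                    # must come before the int check
        return not value
    if isinstance(value, int):
        return value + random.choice([-1, 1])
    if isinstance(value, float):
        return value + random.uniform(-1.0, 1.0)
    if isinstance(value, str):
        if value and random.random() < 0.5:
            i = random.randrange(len(value))
            return value[:i] + value[i + 1:]       # delete one character
        return value + random.choice("abcxyz")     # or append one character
    if isinstance(value, list):
        out = copy.deepcopy(value)
        if out and random.random() < 0.5:
            out.pop(random.randrange(len(out)))    # drop an element
        elif out:
            out.append(mutate(random.choice(out))) # grow with a mutated element
        return out
    if isinstance(value, dict):
        return {k: mutate(v) for k, v in value.items()}
    return value                                   # unknown types pass through unchanged

def generate_inputs(seed_inputs, budget=1000):
    """Grow a pool of test inputs from LLM-produced seeds by repeated mutation."""
    pool = list(seed_inputs)
    while len(pool) < budget:
        parent = random.choice(pool)               # each entry is a tuple of arguments
        pool.append(tuple(mutate(arg) for arg in parent))
    return pool
```

The second step can be viewed as a set-cover problem: keep a small subset of tests that still satisfies every testing requirement (covered branches, killed mutants, detected wrong samples). Below is a minimal greedy sketch under that framing; the `requirements` mapping is a hypothetical input format, not EvalPlus's API.

```python
def reduce_suite(tests, requirements):
    """Greedy set-cover test-suite reduction.

    `tests` is a list of test ids; `requirements` maps each test id to the set
    of testing requirements it satisfies (e.g. covered branches, killed mutants,
    or wrong LLM samples it detects). Returns a smaller suite that still
    satisfies every reachable requirement.
    """
    uncovered = set().union(*requirements.values()) if requirements else set()
    reduced = []
    while uncovered:
        best = max(tests, key=lambda t: len(requirements[t] & uncovered))
        gained = requirements[best] & uncovered
        if not gained:          # no remaining test adds anything new
            break
        reduced.append(best)
        uncovered -= gained
    return reduced

# Example with made-up coverage data: tests "t1" and "t3" suffice.
suite = reduce_suite(
    ["t1", "t2", "t3"],
    {"t1": {"b1", "b2", "m1"}, "t2": {"b2"}, "t3": {"m2"}},
)
print(suite)  # ['t1', 't3']
```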

Empirical Evaluation

The paper presents a comprehensive evaluation across 26 popular LLMs, including prominent models such as GPT-4, ChatGPT, and the CodeLlama family. The findings reveal significant discrepancies in pass rates between HumanEval and the expanded HumanEval+ dataset, highlighting the inadequacy of the existing benchmark for fully assessing the correctness of LLM-synthesized code. Notably, pass@k on HumanEval+ drops by up to 28.9% relative to the base HumanEval, exposing previously undetected errors in LLM-generated code.
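The comparison is stated in terms of pass@k, for which the community-standard unbiased estimator (with n generations per problem, c of them passing all tests) is 1 - C(n-c, k)/C(n, k), averaged over problems. A minimal Python sketch follows; the sample counts are invented purely for illustration and are not figures from the paper.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k samples,
    drawn from n generations of which c are correct, passes all tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Made-up counts for one problem: the extra HumanEval+ tests can only turn
# passing samples into failing ones, so the estimate can only stay equal or drop.
n = 200
print(pass_at_k(n, 120, 1))  # 0.60 -> score under the original tests
print(pass_at_k(n, 96, 1))   # 0.48 -> score once the added tests reject 24 samples
```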

Implications and Future Directions

EvalPlus not only reveals the insufficiencies in current benchmarks but also suggests a path forward for more accurate evaluation of LLMs in code synthesis. The framework opens avenues for enhancing existing datasets with automated testing and indicates potential improvements in LLM evaluation accuracy.

Future work could further refine EvalPlus by integrating formal verification methods, increasing its applicability across diverse programming tasks, and potentially incorporating it into AI programming tools to identify code flaws proactively. As the field of code generation continues to evolve, EvalPlus sets a robust standard for evaluating the effectiveness and correctness of synthesized code, providing a foundation for continued advancements in AI-driven programming solutions.

In summary, this paper provides a rigorous methodological and empirical foundation for more precise evaluation of LLM-generated code, setting a precedent for future developments in program synthesis and its evaluation.

Authors (4)
  1. Jiawei Liu (156 papers)
  2. Chunqiu Steven Xia (13 papers)
  3. Yuyao Wang (6 papers)
  4. Lingming Zhang (48 papers)
Citations (530)