CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code (2302.05527v2)

Published 10 Feb 2023 in cs.SE, cs.LG, and cs.PL

Abstract: Since the rise of neural natural-language-to-code models (NL->Code) that can generate long expressions and statements rather than a single next-token, one of the major problems has been reliably evaluating their generated output. In this paper, we propose CodeBERTScore: an evaluation metric for code generation, which builds on BERTScore (Zhang et al., 2020). Instead of encoding only the generated tokens as in BERTScore, CodeBERTScore also encodes the natural language input preceding the generated code, thus modeling the consistency between the generated code and its given natural language context as well. We perform an extensive evaluation of CodeBERTScore across four programming languages. We find that CodeBERTScore achieves a higher correlation with human preference and with functional correctness than all existing metrics. That is, generated code that receives a higher score by CodeBERTScore is more likely to be preferred by humans, as well as to function correctly when executed. We release five language-specific pretrained models to use with our publicly available code. Our language-specific models have been downloaded more than 1,000,000 times from the Huggingface Hub. Our code and data are available at https://github.com/neulab/code-bert-score

An Analysis of CodeBERTScore: Evaluating Code Generation with Pretrained Models

The paper "CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code" addresses a significant challenge in the domain of natural-language-to-code (NL→Code) modeling: the reliable evaluation of generated code artifacts. Traditional LLMs, though proficient in text generation, face hurdles in accurately assessing code due to intrinsic differences between natural languages and programming languages. This paper proposes a sophisticated metric, CodeBERTScore, which extends BERTScore to evaluate generated code by leveraging pretrained models of code.

Core Proposals and Methodologies

  1. CodeBERTScore Introduction: The paper introduces CodeBERTScore, a metric tuned specifically for evaluating code generation. It leverages the semantic representations of pretrained models of code, specifically CodeBERT. Moving beyond traditional token-matching techniques allows the metric to capture the nuanced semantics embedded in code rather than mere syntactic similarity.
  2. Evaluation Framework: CodeBERTScore measures the cosine similarity between the encoded token representations of candidate and reference code snippets. Distinctively, it also encodes the natural language input that precedes the generated code, so the score reflects consistency with the code's stated intent as well as its implementation (see the sketch after this list).
  3. Empirical Validation: The authors conduct extensive empirical testing across four programming languages (Java, Python, C++, and JavaScript) to establish the robustness of CodeBERTScore. They find that the metric correlates more strongly with human preference and with functional correctness than existing metrics such as BLEU, CodeBLEU, and CrystalBLEU.
  4. Functional Correctness & Distinguishability: Practical evaluations were run on the HumanEval benchmark for functional correctness and on a dataset of semantically equivalent code classes derived from ShareCode, demonstrating that CodeBERTScore distinguishes between semantically equivalent and inequivalent programs with a distinguishability score that clearly surpasses conventional measures.
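
To make the matching in item 2 concrete, below is a minimal sketch of a BERTScore-style computation over token embeddings from a pretrained code encoder. The model ID, the simple prepending of the natural language context, and the lack of token masking or importance weighting are simplifying assumptions; the authors' released implementation handles these details.

```python
# Minimal sketch of the BERTScore-style matching that CodeBERTScore builds on.
# The model ID and the handling of the NL context are illustrative assumptions;
# see https://github.com/neulab/code-bert-score for the actual implementation.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "microsoft/codebert-base"  # any pretrained code encoder works for this sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

def encode(text: str) -> torch.Tensor:
    """Return L2-normalized token embeddings of shape (seq_len, hidden_dim)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    return torch.nn.functional.normalize(hidden, dim=-1)

def codebertscore_f1(candidate: str, reference: str, context: str = "") -> float:
    """Greedy-match token similarities between candidate and reference.
    The NL context is prepended to the candidate before encoding; the paper
    matches only the code tokens, but this sketch matches all encoded tokens."""
    cand = encode(context + candidate)
    ref = encode(reference)
    sim = cand @ ref.T                          # pairwise cosine similarities
    precision = sim.max(dim=1).values.mean()    # best reference match per candidate token
    recall = sim.max(dim=0).values.mean()       # best candidate match per reference token
    return (2 * precision * recall / (precision + recall)).item()

print(codebertscore_f1("def add(a, b):\n    return a + b",
                       "def add(x, y):\n    return x + y",
                       context="# Add two numbers\n"))
```

Because the score is computed over contextual embeddings rather than exact n-gram overlap, two snippets that differ only in variable names (as in the example above) still receive a high score.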

Noteworthy Results

  • Correlation Metrics: CodeBERTScore consistently achieves higher correlation with human preference and with functional correctness than prior metrics, indicating closer alignment with human evaluators and with execution-based benchmarks, respectively.
  • Download Statistics: The paper notes that the language-specific models have been downloaded more than 1,000,000 times from the Hugging Face Hub. This statistic underscores the practical impact and utility of CodeBERTScore in the community.
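
For readers who want to try the released artifacts, the snippet below is a hypothetical usage sketch: it assumes the package in the linked repository exposes a BERTScore-like score() entry point that takes candidate and reference lists plus a language tag, which should be verified against the repository's documentation.

```python
# Hypothetical usage sketch of the released code-bert-score package; the
# argument names follow the BERTScore convention and may differ from the
# actual API documented at https://github.com/neulab/code-bert-score.
import code_bert_score

predictions = ["def add(a, b):\n    return a + b"]
references = ["def add(x, y):\n    return x + y"]

results = code_bert_score.score(cands=predictions, refs=references, lang="python")
print(results)  # per-example precision/recall/F-measure values
```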

Implications and Future Prospects

The introduction of CodeBERTScore sets a precedent for holistic, context-aware evaluation of code generation models. Its ability to factor in the semantics captured by training on code datasets represents a fundamental enhancement over syntactic n-gram approaches. This may prompt subsequent works to explore further contextual and semantics-driven evaluation metrics in code generation, potentially spurring advancements in automated programming tools and facilitating more refined AI-driven code synthesis.

Future research directions could include extending CodeBERTScore to programming languages beyond the four evaluated, investigating its adaptability and efficiency. Additionally, reducing the computational overhead of running a neural encoder, which typically requires GPU resources, is another avenue for optimization that would broaden applicability across environments.

In sum, the paper presents a compelling method for assessing code generation, marrying pretrained code models with robust evaluation frameworks. In doing so, it makes a valuable contribution to the evaluation methodologies in the AI and software engineering research communities, heralding a shift towards more meaningful and nuanced evaluations of code.

Authors (4)
  1. Shuyan Zhou (28 papers)
  2. Uri Alon (40 papers)
  3. Sumit Agarwal (6 papers)
  4. Graham Neubig (342 papers)
Citations (74)