
A Static Evaluation of Code Completion by Large Language Models (2306.03203v1)

Published 5 Jun 2023 in cs.CL and cs.SE

Abstract: LLMs trained on code have shown great potential to increase the productivity of software developers. Several execution-based benchmarks have been proposed to evaluate the functional correctness of model-generated code on simple programming problems. Nevertheless, it is expensive to perform the same evaluation on complex real-world projects given the cost of execution. In contrast, static analysis tools such as linters, which can detect errors without running the program, have not been well explored for evaluating code generation models. In this work, we propose a static evaluation framework to quantify static errors in Python code completions by leveraging Abstract Syntax Trees. Compared with execution-based evaluation, our method is not only more efficient but also applicable to code in the wild. For experiments, we collect code context from open source repos to generate one million function bodies using public models. Our static analysis reveals that Undefined Name and Unused Variable are the most common errors, among others, made by LLMs. Through extensive studies, we also show the impact of sampling temperature, model size, and context on static errors in code completions.
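The abstract describes checking model-generated Python function bodies with AST parsing and linter-style analysis rather than execution. Below is a minimal sketch of that general idea (not the authors' implementation), using Python's `ast` module for parseability and pyflakes for diagnostics such as Undefined Name and Unused Variable; the helper `count_static_errors` and the sample `COMPLETION` string are illustrative assumptions.

```python
# Sketch: count static errors in a generated completion without running it.
# Assumes pyflakes is installed (pip install pyflakes).
import ast
from collections import Counter

from pyflakes.api import check
from pyflakes.reporter import Reporter


class ErrorCollector(Reporter):
    """Collects pyflakes diagnostics instead of printing them."""

    def __init__(self):
        super().__init__(None, None)  # streams unused; all hooks overridden
        self.errors = Counter()

    def syntaxError(self, filename, msg, lineno, offset, text):
        self.errors["SyntaxError"] += 1

    def unexpectedError(self, filename, msg):
        self.errors["UnexpectedError"] += 1

    def flake(self, message):
        # message classes include UndefinedName, UnusedVariable, ...
        self.errors[type(message).__name__] += 1


def count_static_errors(source: str) -> Counter:
    """Return a histogram of static errors found in a code completion."""
    try:
        ast.parse(source)  # cheap parseability check, no execution needed
    except SyntaxError:
        return Counter({"SyntaxError": 1})
    collector = ErrorCollector()
    check(source, "<completion>", collector)
    return collector.errors


# Hypothetical model output exhibiting the two most common error types.
COMPLETION = """
def add(a, b):
    unused = 1      # UnusedVariable: local assigned but never used
    return a + c    # UndefinedName: 'c'
"""

print(count_static_errors(COMPLETION))
```

In this sketch, errors are keyed by the linter's message class, so aggregating the counters over many completions yields per-error-type frequencies of the kind the paper reports.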

Authors (12)
  1. Hantian Ding (11 papers)
  2. Varun Kumar (35 papers)
  3. Yuchen Tian (12 papers)
  4. Zijian Wang (99 papers)
  5. Rob Kwiatkowski (1 paper)
  6. Xiaopeng Li (166 papers)
  7. Murali Krishna Ramanathan (13 papers)
  8. Baishakhi Ray (88 papers)
  9. Parminder Bhatia (50 papers)
  10. Sudipta Sengupta (7 papers)
  11. Dan Roth (222 papers)
  12. Bing Xiang (74 papers)
Citations (14)