CodeApex: A Bilingual Programming Evaluation Benchmark for Large Language Models (2309.01940v4)

Published 5 Sep 2023 in cs.CL and cs.AI

Abstract: With the emergence of LLMs, there has been a significant improvement in the programming capabilities of models, attracting growing attention from researchers. Evaluating the programming capabilities of LLMs is crucial as it reflects the multifaceted abilities of LLMs, and it has numerous downstream applications. In this paper, we propose CodeApex, a bilingual benchmark dataset focusing on the programming comprehension, code generation, and code correction abilities of LLMs. The programming comprehension task tests LLMs on multiple-choice exam questions covering conceptual understanding, commonsense reasoning, and multi-hop reasoning. The code generation task evaluates LLMs by having them complete C++ functions based on provided descriptions and prototypes. The code correction task asks LLMs to fix real-world erroneous code segments with different error messages. We evaluate 12 widely used LLMs, including both general-purpose and specialized models. GPT-4 exhibits the best programming capabilities, achieving approximate accuracies of 69%, 54%, and 66% on the three tasks, respectively. Compared to human performance, there is still significant room for improvement in LLM programming. We hope that CodeApex can serve as a reference for evaluating the coding capabilities of LLMs, further promoting their development and growth.
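To make the code generation task concrete, here is a minimal sketch of what an item in that style could look like: a natural-language description plus a C++ function prototype that the model is asked to complete. The problem statement, function name, and completion below are invented for illustration and are not taken from the CodeApex dataset itself.

```cpp
// Hypothetical CodeApex-style code generation item (illustrative only, not from the benchmark).
// Description given to the model: return the sum of the even elements of a vector of integers.
// The model receives the description and the prototype, and must produce the function body.

#include <cassert>
#include <vector>

// Prototype supplied to the model:
long long sumOfEvens(const std::vector<int>& nums);

// One possible completion a model might produce:
long long sumOfEvens(const std::vector<int>& nums) {
    long long total = 0;
    for (int x : nums) {
        if (x % 2 == 0) {
            total += x;  // accumulate only even values
        }
    }
    return total;
}

int main() {
    // The benchmark would grade completions against test cases; these asserts play that role here.
    assert(sumOfEvens({1, 2, 3, 4}) == 6);
    assert(sumOfEvens({}) == 0);
    return 0;
}
```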

Authors (16)
  1. Lingyue Fu (8 papers)
  2. Huacan Chai (4 papers)
  3. Shuang Luo (10 papers)
  4. Kounianhua Du (17 papers)
  5. Weiming Zhang (135 papers)
  6. Longteng Fan (2 papers)
  7. Jiayi Lei (7 papers)
  8. Renting Rui (5 papers)
  9. Jianghao Lin (47 papers)
  10. Yuchen Fang (30 papers)
  11. Yifan Liu (134 papers)
  12. Jingkuan Wang (1 paper)
  13. Siyuan Qi (34 papers)
  14. Kangning Zhang (7 papers)
  15. Weinan Zhang (322 papers)
  16. Yong Yu (219 papers)
Citations (9)

GitHub