An Empirical Evaluation of Competitive Programming AI: A Case Study of AlphaCode (2208.08603v2)

Published 18 Aug 2022 in cs.SE

Abstract: AlphaCode is a code generation system for assisting software developers in solving competitive programming problems using natural language problem descriptions. Despite the advantages of such a code generation system, the open source community has expressed concerns about its practicality and data licensing. However, no prior research has investigated the generated code in terms of code clones and performance. In this paper, we conduct an empirical study to find code similarities and performance differences between AlphaCode-generated code and human code. The results show that (i) the code generated by AlphaCode is similar to human code (i.e., the average maximum similarity score is 0.56) and (ii) the generated code performs on par with or worse than the human code in terms of execution time and memory usage. Moreover, AlphaCode tends to generate code that is more similar to human code for low-difficulty problems (i.e., four cases have exactly the same code). According to our manual investigation, it also employs excessive nested loops and unnecessary variable declarations for high-difficulty problems, which cause low performance. The replication package is available at https://doi.org/10.5281/zenodo.6820681

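To make the "maximum similarity score" mentioned in the abstract concrete, the sketch below computes the highest pairwise similarity between one generated solution and a set of human solutions for the same problem. It uses Python's difflib as a simple token-sequence similarity measure; this is an illustrative stand-in, not the clone-detection tooling the paper actually uses, and the function and variable names are hypothetical.

```python
# Minimal sketch of a "maximum similarity" comparison, assuming a simple
# token-sequence similarity (difflib.SequenceMatcher) rather than the
# paper's actual clone-detection tooling. Names are illustrative.
from difflib import SequenceMatcher


def tokenize(source: str) -> list[str]:
    """Split source code into whitespace-delimited tokens (a crude proxy
    for a real lexer)."""
    return source.split()


def max_similarity(generated: str, human_solutions: list[str]) -> float:
    """Return the highest similarity between one generated solution and
    any human solution for the same problem."""
    gen_tokens = tokenize(generated)
    return max(
        SequenceMatcher(None, gen_tokens, tokenize(h)).ratio()
        for h in human_solutions
    )


if __name__ == "__main__":
    generated = "for i in range(n): total += a[i]"
    humans = ["s = 0\nfor x in a: s += x", "print(sum(a))"]
    print(f"max similarity: {max_similarity(generated, humans):.2f}")
```

Averaging this per-problem maximum over all generated solutions would yield a figure comparable in spirit to the 0.56 average maximum similarity the paper reports, though the paper's measurement pipeline may differ.
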
Authors (8)
  1. Sila Lertbanjongngam (2 papers)
  2. Bodin Chinthanet (11 papers)
  3. Takashi Ishio (33 papers)
  4. Raula Gaikovina Kula (83 papers)
  5. Pattara Leelaprute (5 papers)
  6. Bundit Manaskasemsak (2 papers)
  7. Arnon Rungsawang (4 papers)
  8. Kenichi Matsumoto (73 papers)
Citations (16)