PPM: Automated Generation of Diverse Programming Problems for Benchmarking Code Generation Models (2401.15545v1)

Published 28 Jan 2024 in cs.SE, cs.AI, cs.CL, and cs.PL

Abstract: In recent times, a plethora of Large Code Generation Models (LCGMs) have been proposed, showcasing significant potential in assisting developers with complex programming tasks. Benchmarking LCGMs necessitates the creation of a set of diverse programming problems, where each problem comprises the prompt (including the task description), a canonical solution, and test inputs. Existing methods for constructing such a problem set fall into two main types: manual methods and perturbation-based methods. However, manual methods demand high effort, lack scalability, and risk data integrity because LCGMs' training data may already contain the problems, while perturbation-based approaches mainly generate semantically homogeneous problems with the same canonical solutions and introduce typos that can be easily auto-corrected by the IDE, making them ineffective and unrealistic. In this work, we propose the idea of programming problem merging (PPM) and provide two implementations of this idea. We apply our tool to two widely-used datasets and compare it against nine baseline methods using eight code generation models. The results demonstrate the effectiveness of our tool in generating more challenging, diverse, and natural programming problems compared to the baselines.
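To make the problem structure and the merging idea concrete, below is a minimal, hypothetical sketch in Python. It assumes each benchmark problem bundles a prompt, a canonical solution, and test inputs, and it illustrates one plausible way to "merge" two problems by composing their canonical solutions; the `Problem` and `merge_problems` names and the specific merging operator are illustrative assumptions, not the paper's actual implementations.

```python
# Illustrative sketch only: the actual PPM operators in the paper may differ.
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class Problem:
    prompt: str                      # task description shown to the model
    solution: Callable[[Any], Any]   # canonical (reference) solution
    test_inputs: List[Any]           # inputs used to check candidate code


def merge_problems(a: Problem, b: Problem) -> Problem:
    """Hypothetical merge: the new canonical solution is b applied to a's result."""
    merged_prompt = (
        f"{a.prompt}\n"
        f"Then, using that result as input: {b.prompt}"
    )
    return Problem(
        prompt=merged_prompt,
        solution=lambda x: b.solution(a.solution(x)),
        test_inputs=a.test_inputs,   # reuse the first problem's test inputs
    )


# Example: merge "sort a list" with "return the largest element".
sort_problem = Problem("Sort the list in ascending order.", sorted, [[3, 1, 2]])
max_problem = Problem("Return the largest element of the list.", max, [[5, 4]])

merged = merge_problems(sort_problem, max_problem)
for x in merged.test_inputs:
    print(merged.solution(x))  # expected output of the merged task (here: 3)
```

The point of such a composition is that the merged problem has a genuinely new canonical solution and prompt, rather than a cosmetic perturbation of an existing one.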

Authors (5)
  1. Simin Chen (21 papers)
  2. Xiaoning Feng (3 papers)
  3. Xiaohong Han (3 papers)
  4. Cong Liu (169 papers)
  5. Wei Yang (349 papers)
Citations (1)