StudentEval: A Benchmark of Student-Written Prompts for Large Language Models of Code (2306.04556v1)

Published 7 Jun 2023 in cs.LG, cs.HC, and cs.SE

Abstract: Code LLMs are being rapidly deployed and there is evidence that they can make professional programmers more productive. Current benchmarks for code generation measure whether models generate correct programs given an expert prompt. In this paper, we present a new benchmark containing multiple prompts per problem, written by a specific population of non-expert prompters: beginning programmers. StudentEval contains 1,749 prompts for 48 problems, written by 80 students who have only completed one semester of Python programming. Our students wrote these prompts while working interactively with a Code LLM, and we observed very mixed success rates. We use StudentEval to evaluate 5 Code LLMs and find that StudentEval is a better discriminator of model performance than existing benchmarks. We analyze the prompts and find significant variation in students' prompting techniques. We also find that nondeterministic LLM sampling could mislead students into thinking that their prompts are more (or less) effective than they actually are, which has implications for how to teach with Code LLMs.
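
The abstract notes that nondeterministic LLM sampling can make a prompt appear more (or less) effective than it really is; one way to account for this is to estimate a per-prompt pass rate over several sampled completions rather than judging from a single generation. The sketch below is illustrative only and is not the authors' released evaluation harness: it assumes you already have a list of sampled completions and the problem's test assertions as strings, and the function names are hypothetical.

```python
# Minimal illustration of estimating a per-prompt pass rate from several
# sampled completions (a pass@1-style estimate), assuming completions and
# tests are available as Python source strings.

def passes_tests(completion: str, test_code: str) -> bool:
    """Return True if the completion plus its test assertions run without error."""
    namespace: dict = {}
    try:
        exec(completion, namespace)   # define the candidate function
        exec(test_code, namespace)    # run the problem's assertions
        return True
    except Exception:
        return False

def pass_rate(completions: list[str], test_code: str) -> float:
    """Fraction of sampled completions that pass the tests."""
    if not completions:
        return 0.0
    return sum(passes_tests(c, test_code) for c in completions) / len(completions)

# Toy example: one correct and one incorrect sample for the same prompt.
samples = [
    "def add(a, b):\n    return a + b",
    "def add(a, b):\n    return a - b",  # incorrect sample
]
tests = "assert add(2, 3) == 5"
print(pass_rate(samples, tests))  # 0.5
```

Averaging over multiple samples per prompt gives a more stable picture of prompt quality than a single draw, which is the concern the abstract raises about teaching with Code LLMs.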

Authors (6)
  1. Hannah McLean Babe (2 papers)
  2. Sydney Nguyen (3 papers)
  3. Yangtian Zi (6 papers)
  4. Arjun Guha (44 papers)
  5. Molly Q Feldman (7 papers)
  6. Carolyn Jane Anderson (15 papers)
Citations (24)