Execution-based Evaluation for Data Science Code Generation Models (2211.09374v1)

Published 17 Nov 2022 in cs.SE and cs.CL

Abstract: Code generation models can benefit data scientists' productivity by automatically generating code from context and text descriptions. An important measure of modeling progress is whether a model can generate code that executes correctly to solve the task. However, due to the lack of an evaluation dataset that directly supports execution-based model evaluation, existing work relies on code surface-form similarity metrics (e.g., BLEU, CodeBLEU) for model selection, which can be inaccurate. To remedy this, we introduce ExeDS, an evaluation dataset for execution-based evaluation of data science code generation tasks. ExeDS contains 534 problems from Jupyter Notebooks, each consisting of code context, task description, reference program, and the desired execution output. With ExeDS, we evaluate the execution performance of five state-of-the-art code generation models that have achieved high surface-form evaluation scores. Our experiments show that models with high surface-form scores do not necessarily perform well on execution metrics, and that execution-based metrics can better capture model code generation errors. Source code and data can be found at https://github.com/Jun-jie-Huang/ExeDS
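To make the evaluation protocol concrete, below is a minimal sketch of an execution-based scorer in Python. The field names (context, generated_code, expected_output), the subprocess-based runner, and the exact output-matching rule are illustrative assumptions, not the ExeDS implementation; the paper's actual harness and matching criteria may differ.

```python
import subprocess
import tempfile
from pathlib import Path


def run_and_capture(code: str, timeout: int = 30) -> str:
    """Execute a candidate program in a subprocess and return its stdout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            ["python", path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        # A hanging program counts as an execution failure.
        return ""
    finally:
        Path(path).unlink(missing_ok=True)


def execution_accuracy(problems: list[dict]) -> float:
    """Fraction of problems whose generated code reproduces the desired output.

    Each problem dict is assumed (hypothetically) to carry 'context',
    'generated_code', and 'expected_output' fields, mirroring the
    context / task / reference-output structure described in the abstract.
    """
    correct = 0
    for p in problems:
        # Prepend the notebook context so the generated cell runs in place.
        program = p["context"] + "\n" + p["generated_code"]
        if run_and_capture(program) == p["expected_output"].strip():
            correct += 1
    return correct / len(problems)
```

In contrast to BLEU or CodeBLEU, which reward surface overlap with the reference program, a scorer like this gives credit only when the generated code actually runs and produces the desired output, which is the distinction the paper's experiments highlight.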

Authors (9)
  1. Junjie Huang (73 papers)
  2. Chenglong Wang (80 papers)
  3. Jipeng Zhang (46 papers)
  4. Cong Yan (10 papers)
  5. Haotian Cui (6 papers)
  6. Jeevana Priya Inala (18 papers)
  7. Colin Clement (10 papers)
  8. Nan Duan (172 papers)
  9. Jianfeng Gao (344 papers)
Citations (27)