
L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models (2309.17446v2)

Published 29 Sep 2023 in cs.CL, cs.LG, cs.PL, and cs.SE

Abstract: Recently, LLMs, especially those that are pretrained on code, have demonstrated strong capabilities in generating programs from natural language inputs in a few-shot or even zero-shot manner. Despite promising results, there is a notable lack of a comprehensive evaluation of these models' language-to-code generation capabilities. Existing studies often focus on specific tasks, model architectures, or learning paradigms, leading to a fragmented understanding of the overall landscape. In this work, we present L2CEval, a systematic evaluation of the language-to-code generation capabilities of LLMs on 7 tasks across the domain spectrum of semantic parsing, math reasoning, and Python programming, analyzing the factors that potentially affect their performance, such as model size, pretraining data, instruction tuning, and different prompting methods. In addition to assessing model performance, we measure confidence calibration for the models and conduct human evaluations of the output programs. This enables us to identify and analyze the typical failure modes across various tasks and models. L2CEval offers a comprehensive understanding of the capabilities and limitations of LLMs in language-to-code generation. We also release the evaluation framework and all model outputs, hoping to lay the groundwork for future research in this domain.

Authors (14)
  1. Ansong Ni (17 papers)
  2. Pengcheng Yin (42 papers)
  3. Yilun Zhao (59 papers)
  4. Martin Riddell (4 papers)
  5. Troy Feng (2 papers)
  6. Rui Shen (12 papers)
  7. Stephen Yin (1 paper)
  8. Ye Liu (153 papers)
  9. Semih Yavuz (43 papers)
  10. Caiming Xiong (337 papers)
  11. Shafiq Joty (187 papers)
  12. Yingbo Zhou (81 papers)
  13. Dragomir Radev (98 papers)
  14. Arman Cohan (121 papers)
Citations (12)