MultiPL-E: A Scalable and Extensible Approach to Benchmarking Neural Code Generation (2208.08227v4)

Published 17 Aug 2022 in cs.LG and cs.PL

Abstract: LLMs have demonstrated the ability to generate both natural language and programming language text. Such models open up the possibility of multi-language code generation: could code generation models generalize knowledge from one language to another? Although contemporary code generation models can generate semantically correct Python code, little is known about their abilities with other languages. We propose MultiPL-E, a system for translating unit test-driven code generation benchmarks to new languages. We create the first massively multilingual code generation benchmark by using MultiPL-E to translate two popular Python code generation benchmarks to 18 additional programming languages. We use MultiPL-E to extend the HumanEval benchmark and MBPP benchmark to 18 languages that encompass a range of programming paradigms and popularity. Using these new parallel benchmarks, we evaluate the multi-language performance of three state-of-the-art code generation models: Codex, CodeGen, and InCoder. We find that Codex matches or even exceeds its performance on Python for several other languages. The range of programming languages represented in MultiPL-E allows us to explore the impact of language frequency and language features on model performance. Finally, the MultiPL-E approach of compiling code generation benchmarks to new programming languages is both scalable and extensible, making it straightforward to evaluate new models, benchmarks, and languages.
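
To make the benchmark-translation idea concrete, the sketch below shows one way a Python-style problem's unit tests could be mechanically compiled into a test harness for another language (Lua here). This is an illustrative assumption, not the paper's actual compiler: the helper names `lua_literal` and `to_lua_tests`, the supported value types, and the use of a luaunit-style harness are all hypothetical choices made for this example.

```python
# Hypothetical sketch of MultiPL-E-style test translation (not the paper's code).
# Given a function name and Python-level (args, expected) pairs, emit a Lua
# test harness that calls a candidate completion and asserts on its result.

def lua_literal(value):
    """Translate a small subset of Python values into Lua literals."""
    if isinstance(value, bool):          # check bool before int (bool subclasses int)
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return repr(value)
    if isinstance(value, str):
        return '"' + value.replace('"', '\\"') + '"'
    if isinstance(value, list):
        return "{" + ", ".join(lua_literal(v) for v in value) + "}"
    raise ValueError(f"unsupported value: {value!r}")

def to_lua_tests(func_name, cases):
    """Emit a Lua snippet asserting func_name(args...) == expected for each case."""
    lines = ["lu = require('luaunit')", "function test_problem()"]
    for args, expected in cases:
        call = f"{func_name}({', '.join(lua_literal(a) for a in args)})"
        lines.append(f"  lu.assertEquals({call}, {lua_literal(expected)})")
    lines.append("end")
    lines.append("os.exit(lu.LuaUnit.run())")
    return "\n".join(lines)

# Example: HumanEval-style cases for a hypothetical `add` problem.
print(to_lua_tests("add", [((1, 2), 3), ((0, 0), 0)]))
```

The appended harness would be run against a model-generated completion of `add`; aggregating pass/fail results over many sampled completions per problem is what yields the per-language performance comparisons described in the abstract.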

Authors (13)
  1. Federico Cassano (16 papers)
  2. John Gouwar (3 papers)
  3. Daniel Nguyen (4 papers)
  4. Sydney Nguyen (3 papers)
  5. Luna Phipps-Costin (3 papers)
  6. Donald Pinckney (6 papers)
  7. Ming-Ho Yee (7 papers)
  8. Yangtian Zi (6 papers)
  9. Carolyn Jane Anderson (15 papers)
  10. Molly Q Feldman (7 papers)
  11. Arjun Guha (44 papers)
  12. Michael Greenberg (17 papers)
  13. Abhinav Jangda (13 papers)
Citations (60)