
Measuring The Impact Of Programming Language Distribution (2302.01973v3)

Published 3 Feb 2023 in cs.LG, cs.CL, and cs.PL

Abstract: Current benchmarks for evaluating neural code models focus on only a small subset of programming languages, excluding many popular languages such as Go and Rust. To ameliorate this issue, we present the BabelCode framework for execution-based evaluation of any benchmark in any language. BabelCode enables new investigations into the qualitative performance of models' memory, runtime, and individual test case results. Additionally, we present a new code translation dataset called Translating Python Programming Puzzles (TP3), derived from the Python Programming Puzzles benchmark (Schuster et al., 2021), which involves translating expert-level Python functions to any language. With both BabelCode and the TP3 benchmark, we investigate whether balancing the distribution of 14 languages in a training dataset improves an LLM's performance on low-resource languages. Training a model on a balanced corpus results in, on average, 12.34% higher $pass@k$ across all tasks and languages compared to the baseline. We find that this strategy achieves 66.48% better $pass@k$ on low-resource languages at the cost of only a 12.94% decrease on high-resource languages. In our three translation tasks, this strategy yields, on average, 30.77% better low-resource $pass@k$ while having 19.58% worse high-resource $pass@k$.
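
The headline numbers are reported in $pass@k$: the probability that at least one of $k$ sampled programs for a task passes all of its test cases. The abstract does not restate the formula; below is a minimal sketch of the unbiased estimator introduced by Chen et al. (2021), which is the conventional way to compute this metric for execution-based benchmarks (an assumption here, not a quotation from this paper).

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples drawn per task
    c: number of samples that pass every test case
    k: evaluation budget
    """
    if n - c < k:
        # Fewer than k failing samples: every size-k subset contains a pass.
        return 1.0
    # pass@k = 1 - C(n-c, k) / C(n, k), computed as a numerically stable product.
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples per task, 15 of which pass, with a budget of k=10.
print(pass_at_k(n=200, c=15, k=10))  # ~0.55
```

Averaging this estimator over all tasks in a language gives the per-language $pass@k$ figures that the balanced-corpus comparisons above are drawn from.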

Authors (9)
  1. Gabriel Orlanski (5 papers)
  2. Kefan Xiao (7 papers)
  3. Xavier Garcia (36 papers)
  4. Jeffrey Hui (6 papers)
  5. Joshua Howland (5 papers)
  6. Jonathan Malmaud (6 papers)
  7. Jacob Austin (15 papers)
  8. Rishabh Singh (58 papers)
  9. Michele Catasta (9 papers)
Citations (21)