Multi-lingual Evaluation of Code Generation Models (2210.14868v3)

Published 26 Oct 2022 in cs.LG and cs.CL

Abstract: We present new benchmarks for evaluating code generation models: MBXP, Multilingual HumanEval, and MathQA-X. These datasets cover over 10 programming languages and are generated using a scalable conversion framework that transpiles prompts and test cases from the original Python datasets into the corresponding data in the target language. Using these benchmarks, we are able to assess the performance of code generation models in a multi-lingual fashion, and discover the generalization ability of LLMs on out-of-domain languages, the advantages of multi-lingual models over mono-lingual ones, the ability of few-shot prompting to teach the model new languages, and zero-shot translation abilities even in mono-lingual settings. Furthermore, we use our code generation model to perform large-scale bootstrapping to obtain synthetic canonical solutions in several languages, which can be used for other code-related evaluations such as code insertion, robustness, or summarization tasks. Overall, our benchmarks represent a significant step towards a deeper understanding of LLMs' code generation abilities. We publicly release our code and datasets at https://github.com/amazon-research/mxeval.

Multi-lingual Evaluation of Code Generation Models: A Summary

The paper focuses on the evaluation of code generation models across multiple programming languages, introducing three new benchmarks—MBXP, Multilingual HumanEval, and MathQA-X—that support the assessment of code generation models in a multilingual context. The research leverages a scalable framework to transpile existing Python datasets into more than ten other languages, providing a comprehensive basis for evaluating these models' multilingual capabilities.
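
To make the conversion concrete, the sketch below shows roughly what a single converted record could look like. The field names mirror the HumanEval-style JSONL layout (task_id, prompt, test, entry_point), and the Java strings are illustrative placeholders rather than content from the released datasets.

```python
# Illustrative only: roughly what a function-completion record transpiled from a
# Python dataset into Java might look like. The field names follow the
# HumanEval-style JSONL layout; the exact schema of the released MBXP files may
# differ.
converted_record = {
    "task_id": "mbjp/1",   # hypothetical ID for a Java-converted MBPP task
    "language": "java",
    # The prompt ends at the function signature; the model must complete the body.
    "prompt": (
        "class Solution {\n"
        "    /**\n"
        "     * Return the sum of the two arguments.\n"
        "     */\n"
        "    public static int addTwoNumbers(int a, int b) {\n"
    ),
    # The transpiled test harness compiles and executes the completion against
    # assertions converted from the original Python test cases.
    "test": (
        "class Main {\n"
        "    public static void main(String[] args) {\n"
        "        assert Solution.addTwoNumbers(2, 3) == 5;\n"
        "        assert Solution.addTwoNumbers(-1, 1) == 0;\n"
        "    }\n"
        "}\n"
    ),
    "entry_point": "addTwoNumbers",
}
```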

Core Contributions

  1. Benchmarks and Dataset Conversion: The authors present a robust framework that converts execution-based evaluation datasets from Python into multiple programming languages, providing a scalable approach to dataset preparation. They focus on function-completion tasks and convert both the prompts and the accompanying test cases, preserving the testing infrastructure needed to verify functional correctness in each target language.
  2. Comprehensive Evaluation: The paper conducts a large-scale evaluation using several trained models, ranging from 125 million to 13 billion parameters, to compare multi-lingual models against mono-lingual ones. This is carried out across tasks such as in-domain and out-of-domain code generation, few-shot prompting, zero-shot translation, and robustness assessments (a sketch of the pass@k metric used for this kind of execution-based scoring follows this list).
  3. Synthetic Solutions: Through large-scale bootstrapping, the paper generates synthetic canonical solutions in new languages, extending the applicability of these datasets to tasks like code insertion, robustness evaluation, and summarization.
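
Execution-based benchmarks of this kind are typically scored with pass@k: the probability that at least one of k sampled completions passes all test cases. A minimal sketch of the standard unbiased estimator (from Chen et al., 2021, and widely used for benchmarks in this family) is shown below; n is the number of samples drawn per problem and c the number that pass.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    completions, drawn without replacement from n samples of which c are
    correct, passes the tests."""
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed in a numerically stable form
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 3 of 10 samples pass the tests for a problem.
print(pass_at_k(10, 3, 1))  # ~0.30
print(pass_at_k(10, 3, 5))  # ~0.92
```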

Key Findings

  • Multi-lingual models generally outperform their mono-lingual counterparts once the model reaches sufficient capacity, demonstrating the value of training a single model on data from diverse languages.
  • The presence of cross-language knowledge spillover, where data from one language is embedded within another, contributes significantly to the models’ out-of-domain language capabilities. This phenomenon facilitates the generation of syntactically and semantically correct programs even in languages not specifically targeted during training.
  • Few-shot prompting significantly enhances the models' abilities to generate syntactically proper code for out-of-domain languages, reducing non-assertion errors such as syntax or compilation errors.
  • LLMs exhibit notable zero-shot translation abilities, where reference solutions in one language improve function completion in another. This is observable even in mono-lingual models, suggesting that knowledge of the target language is a critical factor in translation performance (illustrated in the sketch below).
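
The zero-shot translation setting described in the last point can be pictured as simple prompt concatenation: a reference solution in the source language is placed above the function signature to be completed in the target language, and everything else proceeds as ordinary function completion. The sketch below illustrates that layout; the formatting details are assumptions, not the paper's exact prompt template.

```python
# Illustrative sketch of a zero-shot "translation" prompt: a Python reference
# solution is prepended to a Java function-completion prompt, and the model is
# asked to continue with the Java function body.
python_reference = '''\
def add_two_numbers(a, b):
    """Return the sum of the two arguments."""
    return a + b
'''

java_prompt = '''\
class Solution {
    /**
     * Return the sum of the two arguments.
     */
    public static int addTwoNumbers(int a, int b) {
'''

translation_prompt = python_reference + "\n" + java_prompt
# The model's continuation is then compiled and run against the converted Java
# test cases, exactly as in ordinary function completion -- only the prompt differs.
print(translation_prompt)
```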

Implications and Future Directions

The work highlights both practical and theoretical implications. Practically, the findings suggest that training large-scale, multi-lingual models could be more efficient and effective than maintaining several specialized models, enabling better cross-lingual support systems for developers. Theoretically, the paper sheds light on the potential mechanisms of knowledge transfer within code generation models, providing a foundation for future research in LLM generalization across domains.

The framework established in this research also holds promise for further developments in AI, including enhancing the robustness of code models through rigorous perturbation tests and extending translation capabilities across programming languages in diverse real-world scenarios. The extension to more programming languages and the exploration of compositionality across language pairs are natural next steps in this line of research.

The release of these benchmarks and datasets provides a platform for future research, potentially paving the way for innovative development in code generation and program synthesis. Researchers can leverage this work to explore new methodologies for inherently multi-lingual systems, making strides toward more sophisticated integrative AI solutions.

Authors (25)
  1. Ben Athiwaratkun (28 papers)
  2. Sanjay Krishna Gouda (7 papers)
  3. Zijian Wang (99 papers)
  4. Xiaopeng Li (166 papers)
  5. Yuchen Tian (12 papers)
  6. Ming Tan (20 papers)
  7. Wasi Uddin Ahmad (41 papers)
  8. Shiqi Wang (162 papers)
  9. Qing Sun (44 papers)
  10. Mingyue Shang (13 papers)
  11. Sujan Kumar Gonugondla (6 papers)
  12. Hantian Ding (11 papers)
  13. Varun Kumar (35 papers)
  14. Nathan Fulton (7 papers)
  15. Arash Farahani (1 paper)
  16. Siddhartha Jain (21 papers)
  17. Robert Giaquinto (5 papers)
  18. Haifeng Qian (27 papers)
  19. Murali Krishna Ramanathan (13 papers)
  20. Ramesh Nallapati (38 papers)
Citations (125)