Automatically Testing Functional Properties of Code Translation Models (2309.12813v2)
Abstract: LLMs are becoming increasingly practical for translating code across programming languages, a process known as *transpiling*. Even though automated transpilation significantly boosts developer productivity, a key concern is whether the generated code is correct. Existing work initially used manually crafted test suites to test the translations of a small corpus of programs; these test suites were later automated. In contrast, we devise the first approach for automated, functional, property-based testing of code translation models. It relies on general, user-provided specifications about the transpiled code that capture a range of properties, from purely syntactic to purely semantic ones. As our experiments show, this approach is very effective in detecting property violations in popular code translation models and, therefore, in evaluating model quality with respect to given properties. We also go a step further and explore the usage scenario where a user simply aims to obtain a correct translation of some code with respect to certain properties, without necessarily being concerned about the overall quality of the model. To this end, we develop the first property-guided search procedure for code translation models, in which a model is repeatedly queried with slightly different parameters to produce alternative and potentially more correct translations. Our results show that this search procedure helps to obtain significantly better code translations.
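To make the two ideas concrete, here is a minimal sketch in Python of what a property-based check and a property-guided search loop might look like. This is not the paper's implementation: the `translate` callable, the property names, and the temperature schedule are all illustrative assumptions; properties here target translations *into* Python as the example target language.

```python
import ast

def parses(code: str) -> bool:
    """Purely syntactic example property: the translation must be valid Python."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def agrees_on(fn_name: str, args: tuple, expected):
    """Build a semantic example property: the translated function must
    produce `expected` on `args` (e.g., an output of the source program)."""
    def prop(code: str) -> bool:
        env: dict = {}
        try:
            exec(code, env)                      # load the candidate translation
            return env[fn_name](*args) == expected
        except Exception:                        # crash or wrong name => violation
            return False
    return prop

def property_guided_search(translate, source: str, properties, max_tries: int = 10):
    """Sketch of a property-guided search: repeatedly query the model with
    slightly different sampling parameters until some candidate translation
    satisfies all given properties, or the query budget runs out."""
    for i in range(max_tries):
        temperature = 0.2 + 0.08 * i             # vary one decoding parameter per query
        candidate = translate(source, temperature=temperature)
        if all(prop(candidate) for prop in properties):
            return candidate                     # first property-satisfying translation
    return None                                  # no candidate passed within the budget

# Hypothetical usage: `model_translate` and `java_source` are placeholders.
# result = property_guided_search(
#     model_translate, java_source, [parses, agrees_on("add", (2, 3), 5)]
# )
```

Varying the temperature is just one plausible realization of "slightly different parameters"; any sampling knob (top-p, seed, prompt phrasing) could play the same role, and the violated property could also be fed back to steer the next query.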
Authors: Hasan Ferit Eniser, Valentin Wüstholz, Maria Christakis