
CodeEditorBench: Evaluating Code Editing Capability of Large Language Models (2404.03543v2)

Published 4 Apr 2024 in cs.SE, cs.AI, cs.CL, and cs.LG

Abstract: LLMs for code are rapidly evolving, with code editing emerging as a critical capability. We introduce CodeEditorBench, an evaluation framework designed to rigorously assess the performance of LLMs in code editing tasks, including debugging, translating, polishing, and requirement switching. Unlike existing benchmarks focusing solely on code generation, CodeEditorBench emphasizes real-world scenarios and practical aspects of software development. We curate diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks. Evaluation of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and GPT-4) outperform open-source models in CodeEditorBench, highlighting differences in model performance based on problem types and prompt sensitivities. CodeEditorBench aims to catalyze advancements in LLMs by providing a robust platform for assessing code editing capabilities. We will release all prompts and datasets to enable the community to expand the dataset and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to the advancement of LLMs in code editing and provide a valuable resource for researchers and practitioners.
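The benchmark evaluates model edits by executing them against test cases. As a rough illustration of how such a harness might score a "debug" edit, here is a minimal sketch; all names here (`EditTask`, `score_edit`, the `solve` entry point, the sample task) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a CodeEditorBench-style scoring step:
# execute the model's edited code against hidden unit tests and
# report the pass fraction. Illustrative only; not the paper's harness.
from dataclasses import dataclass, field

@dataclass
class EditTask:
    task_type: str                # "debug", "translate", "polish", "switch"
    buggy_code: str               # input code shown to the model
    tests: list = field(default_factory=list)  # (args, expected) pairs

def score_edit(task: EditTask, edited_code: str) -> float:
    """Fraction of hidden tests the edited code passes (0.0 to 1.0)."""
    namespace: dict = {}
    try:
        exec(edited_code, namespace)   # define the candidate's entry point
    except Exception:
        return 0.0                     # edit does not even parse/run
    fn = namespace.get("solve")
    if fn is None:
        return 0.0                     # required entry point missing
    passed = 0
    for args, expected in task.tests:
        try:
            if fn(*args) == expected:
                passed += 1
        except Exception:
            pass                       # runtime error counts as a failure
    return passed / len(task.tests) if task.tests else 0.0

# Example: a "debug" task where the fix flips an inverted comparison.
task = EditTask(
    task_type="debug",
    buggy_code="def solve(a, b): return a if a < b else b",  # returns min
    tests=[((1, 2), 2), ((5, 3), 5), ((0, 0), 0)],           # wants max
)
model_fix = "def solve(a, b): return a if a > b else b"
print(score_edit(task, model_fix))  # 1.0
```

Execution-based scoring of this kind is what distinguishes editing benchmarks from string-match metrics: an edit is credited only if the resulting program behaves correctly, regardless of how it is written.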

Authors (16)
  1. Jiawei Guo
  2. Ziming Li
  3. Xueling Liu
  4. Kaijing Ma
  5. Tianyu Zheng
  6. Zhouliang Yu
  7. Ding Pan
  8. Ruibo Liu
  9. Yue Wang
  10. Shuyue Guo
  11. Xingwei Qu
  12. Xiang Yue
  13. Ge Zhang
  14. Wenhu Chen
  15. Jie Fu
  16. Yizhi Li