
Pitfalls in Language Models for Code Intelligence: A Taxonomy and Survey (2310.17903v1)

Published 27 Oct 2023 in cs.SE and cs.AI

Abstract: Modern language models (LMs) have been successfully employed in source code generation and understanding, leading to a significant increase in research focused on learning-based code intelligence, such as automated bug repair and test case generation. Despite their great potential, language models for code intelligence (LM4Code) are susceptible to pitfalls that hinder realistic performance and further impact their reliability and applicability in real-world deployment. Such challenges drive the need for a comprehensive understanding: not just identifying these issues but also delving into their possible implications and existing solutions to build more reliable language models tailored to code intelligence. Based on a well-defined systematic research approach, we conducted an extensive literature review to uncover the pitfalls inherent in LM4Code, identifying 67 primary studies from top-tier venues. After carefully examining these studies, we designed a taxonomy of pitfalls in LM4Code research and conducted a systematic study to summarize the issues, implications, current solutions, and challenges of different pitfalls for LM4Code systems. We developed a comprehensive classification scheme that dissects pitfalls across four crucial aspects: data collection and labeling, system design and learning, performance evaluation, and deployment and maintenance. Through this study, we aim to provide a roadmap for researchers and practitioners, facilitating their understanding and utilization of LM4Code in reliable and trustworthy ways.

Authors (8)
  1. Xinyu She (3 papers)
  2. Yue Liu (257 papers)
  3. Yanjie Zhao (39 papers)
  4. Yiling He (13 papers)
  5. Li Li (657 papers)
  6. Chakkrit Tantithamthavorn (49 papers)
  7. Zhan Qin (54 papers)
  8. Haoyu Wang (309 papers)
Citations (10)
