Repoformer: Selective Retrieval for Repository-Level Code Completion (2403.10059v2)

Published 15 Mar 2024 in cs.SE and cs.CL

Abstract: Recent advances in retrieval-augmented generation (RAG) have initiated a new era in repository-level code completion. However, the invariable use of retrieval in existing methods exposes issues in both efficiency and robustness, with a large proportion of the retrieved contexts proving unhelpful or harmful to code LLMs (code LMs). In this paper, we propose a selective RAG framework to avoid retrieval when unnecessary. To power this framework, we design a self-supervised learning approach to enable a code LM to accurately self-evaluate whether retrieval can improve its output quality and robustly leverage the potentially noisy retrieved contexts. Using this LM as both the selective RAG policy and the generation model, our framework achieves state-of-the-art repository-level code completion performance on diverse benchmarks including RepoEval, CrossCodeEval, and CrossCodeLongEval, a new long-form code completion benchmark. Meanwhile, our analyses show that selectively retrieving brings as much as 70% inference speedup in the online serving setting without harming the performance. We further demonstrate that our framework is able to accommodate different generation models, retrievers, and programming languages. These advancements position our framework as an important step towards more accurate and efficient repository-level code completion.

Repoformer: Advancing Efficiency in Repository-Level Code Completion with Selective Retrieval

Introduction to Selective Retrieval in RAG

In the field of code completion, particularly at the repository level, the integration of retrieval-augmented generation (RAG) techniques has been instrumental. These methods leverage contextually relevant code snippets or documentation from the same repository to enhance the predictive accuracy of code LLMs (code LMs). Traditional RAG-based approaches invariably utilize retrieval, assuming it always contributes positively to the completion task. This paper introduces a paradigm shift by questioning and subsequently disproving this assumption. The proposed framework, centered around Repoformer, employs selective retrieval, invoking it only when deemed beneficial, thus optimizing both robustness and efficiency in repository-level code completion.
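Selective retrieval can be pictured as a thin gating layer in front of an ordinary RAG pipeline. The sketch below is illustrative, not the paper's implementation: `should_retrieve`, the self-evaluation score, and the threshold are assumed names, and Repoformer produces this signal itself during generation rather than through a separate function.

```python
from typing import Callable


def should_retrieve(self_eval_score: float, threshold: float = 0.5) -> bool:
    """Selective-RAG policy: retrieve only when the LM's self-assessed
    probability that retrieval will help exceeds a threshold."""
    return self_eval_score >= threshold


def complete(prompt: str,
             self_eval_score: float,
             retriever: Callable[[str], str],
             lm: Callable[[str], str]) -> str:
    """Sketch of the selective pipeline: skip the retriever (and its
    latency) when the policy predicts no benefit."""
    if should_retrieve(self_eval_score):
        context = retriever(prompt)         # e.g. cross-file snippets from the repo
        return lm(context + "\n" + prompt)  # retrieval-augmented path
    return lm(prompt)                       # direct generation, no retrieval cost
```

Raising the threshold trades a small amount of accuracy for fewer retrievals; as discussed later, the paper tunes this threshold to navigate the performance-latency trade-off.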

The Dilemma of Invariable Retrieval

Empirical evidence suggests that a substantial portion of retrievals in existing methods does not enhance, and can even degrade, the performance of code LMs. Analysis across diverse repository-level code completion tasks reveals that retrieval improves code LM performance in 20% or fewer of the instances. Notably, a significant number of retrievals introduce inefficiencies or irrelevant information detrimental to the task at hand. These findings underline the inefficacy of the invariable-retrieval strategy and demand a more discerning approach to leveraging retrieved contexts.

Repoformer: A Solution to the Invariable Retrieval Issue

Repoformer intelligently circumvents unnecessary retrievals. By self-evaluating how much retrieval is likely to improve a specific completion, it engages the retrieval mechanism only when needed. This self-selective methodology not only improves performance across various benchmarks but also yields up to a 70% inference speedup without compromising output quality.

Three core principles underpin Repoformer:

  • Performance-oriented self-evaluation: determining the need for retrieval based not only on whether the model already possesses the requisite knowledge for code completion but also on the relevance and utility of additional context that retrieval might offer.
  • Robustness to retrieved contexts: an enhanced ability to leverage meaningful context when available and disregard it when not, minimizing potential performance degradation from unhelpful retrievals.
  • Generalizability: the proficiency to operate across different languages, retrievers, and generative models, ensuring its scalable application and facilitation as a plug-and-play solution for existing code LMs.
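The performance-oriented self-evaluation above is trained with the paper's self-supervised procedure, which at its core compares completions produced with and without retrieved context. The sketch below is a schematic reconstruction of that labeling idea; the function and metric names are assumptions, not the paper's actual training code.

```python
from typing import Callable


def make_selfsupervised_label(lm: Callable[[str], str],
                              retriever: Callable[[str], str],
                              prompt: str,
                              reference: str,
                              metric: Callable[[str, str], float]) -> bool:
    """Label one training instance for the selective-retrieval policy:
    True iff adding retrieved context improves completion quality
    against the reference under the chosen metric."""
    base_score = metric(lm(prompt), reference)              # no retrieval
    context = retriever(prompt)
    aug_score = metric(lm(context + "\n" + prompt), reference)  # with retrieval
    return aug_score > base_score
```

Instances labeled in this way let the LM learn to predict, before retrieval happens, whether it is worth paying for.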

Empirical Validation and Analysis

Repoformer's effectiveness is rigorously evaluated on comprehensive benchmarks, including RepoEval, CrossCodeEval, and CrossCodeLongEval, a newly introduced long-form code completion benchmark. The model consistently outperforms state-of-the-art methods in both accuracy and efficiency. Additional analyses reveal Repoformer's calibrated decision-making in selective retrieval, its enhanced robustness to retrieved context, and its flexibility in accommodating various threshold settings for optimal performance-latency trade-offs. Moreover, Repoformer's capacity to augment existing code LMs with selective RAG capabilities underscores its utility as a plug-and-play component for efficient repository-level code completion.
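The latency benefit follows from simple expected-cost arithmetic: retrieval overhead is paid only on the fraction of requests where the policy fires. The toy model below uses made-up numbers solely to illustrate the shape of the trade-off, not measurements from the paper.

```python
def expected_latency(p_retrieve: float,
                     t_generate: float,
                     t_retrieval_overhead: float) -> float:
    """Expected per-request latency when retrieval (index lookup plus a
    longer augmented prompt) is triggered with probability p_retrieve."""
    return t_generate + p_retrieve * t_retrieval_overhead


# Toy numbers: if retrieval doubles the cost of a request, invoking it on
# only 20% of requests cuts expected latency from 2.0 to 1.2 time units.
always_on = expected_latency(1.0, 1.0, 1.0)   # invariable retrieval
selective = expected_latency(0.2, 1.0, 1.0)   # selective policy
```

The selection threshold controls `p_retrieve`, which is how the framework exposes a tunable performance-latency dial.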

Concluding Remarks

The contributions of Repoformer extend beyond the immediate enhancements in repository-level code completion. By challenging the conventional wisdom of invariable retrieval, it lays the groundwork for more discerning, efficiency-oriented approaches to augmenting code LMs. The advancements presented hold promise for refining programming environments, fostering more sustainable coding practices, and facilitating continual improvement in automated code completion technologies.

Authors (5)
  1. Di Wu (477 papers)
  2. Wasi Uddin Ahmad (41 papers)
  3. Dejiao Zhang (20 papers)
  4. Murali Krishna Ramanathan (13 papers)
  5. Xiaofei Ma (31 papers)
Citations (14)