Improving Code Search with Hard Negative Sampling Based on Fine-tuning (2305.04508v2)

Published 8 May 2023 in cs.SE

Abstract: Pre-trained code models have emerged as the state-of-the-art paradigm for code search tasks. The paradigm involves pre-training the model on search-irrelevant tasks such as masked language modeling, followed by the fine-tuning stage, which focuses on the search-relevant task. The typical fine-tuning method is to employ a dual-encoder architecture to encode semantic embeddings of query and code separately, and then calculate their similarity based on the embeddings. However, the typical dual-encoder architecture falls short in modeling token-level interactions between query and code, which limits the capabilities of the model. To address this limitation, we introduce a cross-encoder architecture for code search that jointly encodes the concatenation of query and code. We further introduce a Retriever-Ranker (RR) framework that cascades the dual-encoder and cross-encoder to promote the efficiency of evaluation and online serving. Moreover, we present a ranking-based hard negative sampling (PS) method to improve the ability of the cross-encoder to distinguish hard negative codes, which further enhances the cascaded RR framework. Experiments on four datasets using three code models demonstrate the superiority of our proposed method. We have made the code available at https://github.com/DongHande/R2PS.
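The abstract contrasts a dual-encoder (independent query and code embeddings, similarity via dot product) with a cross-encoder (joint encoding of the concatenated query and code) and cascades them in a Retriever-Ranker pipeline. The sketch below illustrates that cascade under stated assumptions: the checkpoint name (microsoft/codebert-base), masked mean pooling, the sequence lengths, and the untrained single-logit scoring head are illustrative choices, not the authors' released implementation at https://github.com/DongHande/R2PS.

```python
# Minimal sketch of a Retriever-Ranker cascade for code search:
# a dual-encoder retriever narrows the candidate set, a cross-encoder re-ranks it.
# Model names, pooling, and the scoring head are assumptions for demonstration.
import torch
from transformers import AutoModel, AutoModelForSequenceClassification, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

# --- Dual encoder (retriever): query and code are embedded independently. ---
tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
encoder = AutoModel.from_pretrained("microsoft/codebert-base").to(device).eval()

def embed(texts):
    """Masked mean pooling over the last hidden states (one common pooling choice)."""
    batch = tok(texts, padding=True, truncation=True, max_length=256,
                return_tensors="pt").to(device)
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)             # (B, T, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)
    return torch.nn.functional.normalize(pooled, dim=-1)

query = "read a json file into a dict"
codes = [
    "def load_json(path):\n    import json\n    return json.load(open(path))",
    "def save_csv(rows, path):\n    import csv\n    csv.writer(open(path, 'w')).writerows(rows)",
]

# Retrieval: cosine similarity between independent embeddings (fast, indexable offline).
sims = embed([query]) @ embed(codes).T                       # (1, num_codes)
top_k = sims.squeeze(0).topk(k=min(2, len(codes))).indices.tolist()

# --- Cross encoder (ranker): jointly encode the (query, code) concatenation. ---
# A single-logit classification head scores each pair with full token-level
# interaction; here the head is randomly initialized, i.e. not yet fine-tuned.
ranker = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=1).to(device).eval()

pairs = tok([query] * len(top_k), [codes[i] for i in top_k],
            padding=True, truncation=True, max_length=320,
            return_tensors="pt").to(device)
with torch.no_grad():
    scores = ranker(**pairs).logits.squeeze(-1)              # (k,) relevance scores

reranked = [top_k[i] for i in scores.argsort(descending=True).tolist()]
print("retriever order:", top_k, "-> ranker order:", reranked)
```

In this setting, the paper's ranking-based hard negative sampling would come into play during fine-tuning of the ranker: the highest-ranked non-matching codes returned by the retriever serve as hard negatives for the cross-encoder, rather than randomly sampled codes. The snippet above only shows inference-time cascading.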

Citations (2)
