Improving Code Search with Hard Negative Sampling Based on Fine-tuning (2305.04508v2)
Abstract: Pre-trained code models have emerged as the state-of-the-art paradigm for code search tasks. The paradigm involves pre-training the model on search-irrelevant tasks such as masked language modeling, followed by a fine-tuning stage that focuses on the search-relevant task. The typical fine-tuning method is to employ a dual-encoder architecture that encodes semantic embeddings of the query and the code separately and then computes their similarity from the embeddings. However, the typical dual-encoder architecture falls short in modeling token-level interactions between query and code, which limits the model's capability. To address this limitation, we introduce a cross-encoder architecture for code search that jointly encodes the concatenation of query and code. We further introduce a Retriever-Ranker (RR) framework that cascades the dual-encoder and cross-encoder to improve the efficiency of evaluation and online serving. Moreover, we present a ranking-based hard negative sampling (PS) method to improve the cross-encoder's ability to distinguish hard negative codes, which further enhances the cascaded RR framework. Experiments on four datasets using three code models demonstrate the superiority of our proposed method. We have made the code available at https://github.com/DongHande/R2PS.
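The abstract describes three pieces: a dual-encoder that scores query-code pairs from separately computed embeddings, a cross-encoder that jointly encodes the concatenated pair so tokens can interact, and a Retriever-Ranker cascade in which the cheap dual-encoder shortlists candidates and the cross-encoder re-ranks them. The sketch below is only an illustration of that scoring flow, not the paper's implementation: `DualEncoder`, `CrossEncoder`, and `retrieve_then_rank` are hypothetical stand-ins (the paper fine-tunes pre-trained code models such as CodeBERT rather than the toy encoders used here).

```python
import torch
import torch.nn.functional as F
from torch import nn


class DualEncoder(nn.Module):
    """Hypothetical retriever: encodes query and code separately;
    relevance is the similarity of the two embeddings."""
    def __init__(self, vocab_size=1000, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)

    def forward(self, token_ids):
        # Normalized embedding so the dot product acts as cosine similarity.
        return F.normalize(self.embed(token_ids), dim=-1)


class CrossEncoder(nn.Module):
    """Hypothetical ranker: jointly encodes the concatenated (query, code)
    pair into one relevance score, allowing token-level interaction."""
    def __init__(self, vocab_size=1000, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, query_ids, code_ids):
        joint = torch.cat([query_ids, code_ids], dim=1)  # concatenation of query and code
        return self.score(self.embed(joint)).squeeze(-1)


def retrieve_then_rank(query_ids, corpus_ids, dual, cross, top_k=10):
    """Retriever-Ranker cascade: dual-encoder shortlist, cross-encoder re-rank."""
    q_emb = dual(query_ids)                       # (1, dim)
    c_emb = dual(corpus_ids)                      # (N, dim)
    sims = (q_emb @ c_emb.T).squeeze(0)           # similarity of query to every code
    shortlist = sims.topk(min(top_k, len(corpus_ids))).indices
    rerank = cross(query_ids.expand(len(shortlist), -1), corpus_ids[shortlist])
    return shortlist[rerank.argsort(descending=True)]


# Toy usage: one query against five candidate code snippets, each padded to 8 tokens.
query = torch.randint(0, 1000, (1, 8))
corpus = torch.randint(0, 1000, (5, 8))
ranked = retrieve_then_rank(query, corpus, DualEncoder(), CrossEncoder(), top_k=3)
print(ranked)  # candidate indices after cross-encoder re-ranking
```

The cascade exists because the cross-encoder must re-encode every query-code pair, which is too costly to run over a whole corpus; restricting it to the dual-encoder's shortlist keeps online serving tractable while retaining the ranker's finer-grained scoring.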