Enriching Query Semantics for Code Search with Reinforcement Learning (2105.09630v1)

Published 20 May 2021 in cs.SE

Abstract: Code search is a common practice for developers during software implementation. The challenges of accurate code search mainly lie in the knowledge gap between source code and natural language (i.e., queries). Due to the limited number of code-query pairs and the large number of code-description pairs available, prior studies based on deep learning techniques focus on learning the semantic matching relation between source code and corresponding description texts, and hypothesize that the semantic gap between descriptions and user queries is marginal. In this work, we found that code search models trained on code-description pairs may not perform well on user queries, which indicates a semantic gap between queries and code descriptions. To mitigate this gap for more effective code search, we propose QueCos, a Query-enriched Code search model. QueCos learns to generate semantically enriched queries that capture the key semantics of given queries with reinforcement learning (RL). With RL, the code search performance is used as the reward for producing accurate semantically enriched queries. The enriched queries are then employed for code search. Experiments on the benchmark datasets show that QueCos can significantly outperform the state-of-the-art code search models.
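The abstract describes the core training signal: a query-enrichment policy is rewarded by how well a code search model retrieves the correct snippet when given the enriched query. Below is a minimal, illustrative sketch of that idea, not the authors' implementation. The bag-of-words embeddings, the candidate expansion vocabulary, the REINFORCE update, and the reciprocal-rank reward are all stand-in assumptions chosen to keep the example self-contained; the paper's actual query generator and code search model are neural sequence models trained on real datasets.

```python
# Minimal sketch (assumed components, not QueCos itself): a query-enrichment
# policy is trained with REINFORCE, where the reward is the retrieval quality
# that a frozen "code search" scorer achieves with the enriched query.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = ["sort", "list", "file", "read", "parse", "json", "http", "thread"]
EMBED_DIM = 16

class EnrichmentPolicy(nn.Module):
    """Toy policy: map a query embedding to a distribution over expansion terms."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(EMBED_DIM, len(VOCAB))

    def forward(self, query_vec):
        return F.log_softmax(self.proj(query_vec), dim=-1)

def embed_text(tokens, table):
    """Toy bag-of-words embedding shared by queries and code snippets."""
    vecs = [table[t] for t in tokens if t in table]
    return torch.stack(vecs).mean(0) if vecs else torch.zeros(EMBED_DIM)

def retrieval_reward(query_vec, code_vecs, gold_idx):
    """Frozen stand-in for the code search model: cosine similarity ranking.
    The reciprocal rank of the ground-truth snippet serves as the reward."""
    sims = F.cosine_similarity(query_vec.unsqueeze(0), code_vecs)
    rank = (sims > sims[gold_idx]).sum().item() + 1
    return 1.0 / rank

table = {w: torch.randn(EMBED_DIM) for w in VOCAB + ["find", "open"]}
policy = EnrichmentPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

# One training example: a user query, a candidate pool, and the gold snippet.
query = ["find", "file"]
code_snippets = [["read", "file"], ["parse", "json"], ["sort", "list"]]
gold = 0
code_vecs = torch.stack([embed_text(c, table) for c in code_snippets])

for step in range(200):
    q_vec = embed_text(query, table)
    log_probs = policy(q_vec)
    dist = torch.distributions.Categorical(logits=log_probs)
    action = dist.sample()                     # sample one expansion term
    enriched = query + [VOCAB[action.item()]]  # "semantically enriched" query
    reward = retrieval_reward(embed_text(enriched, table), code_vecs, gold)
    loss = -dist.log_prob(action) * reward     # REINFORCE objective
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design point the sketch preserves is that the reward comes from downstream retrieval performance rather than from similarity to a reference text, so the policy is pushed toward expansions that actually help the code search model rank the correct snippet higher.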

Authors (7)
  1. Chaozheng Wang (28 papers)
  2. Zhenghao Nong (1 paper)
  3. Cuiyun Gao (97 papers)
  4. Zongjie Li (29 papers)
  5. Jichuan Zeng (10 papers)
  6. Zhenchang Xing (99 papers)
  7. Yang Liu (2253 papers)
Citations (23)
