What to Retrieve for Effective Retrieval-Augmented Code Generation? An Empirical Study and Beyond (2503.20589v1)
Abstract: Repository-level code generation remains challenging due to complex code dependencies and the limitations of LLMs in processing long contexts. While retrieval-augmented generation (RAG) frameworks are widely adopted, the effectiveness of different retrieved information sources (contextual code, APIs, and similar snippets) has not been rigorously analyzed. Through an empirical study on two benchmarks, we demonstrate that in-context code and potential API information significantly enhance LLM performance, whereas retrieved similar code often introduces noise, degrading results by up to 15%. Based on these preliminary results, we propose AllianceCoder, a novel context-integrated method that employs chain-of-thought prompting to decompose user queries into implementation steps and retrieves APIs via semantic description matching. Through extensive experiments on CoderEval and RepoExec, AllianceCoder achieves state-of-the-art performance, improving Pass@1 by up to 20% over existing approaches.
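The retrieval step the abstract names (matching LLM-generated implementation steps against natural-language descriptions of repository APIs) can be made concrete with a small sketch. The snippet below is a minimal illustration, not AllianceCoder's actual implementation: the encoder (sentence-transformers' `all-MiniLM-L6-v2`), the `retrieve_apis` helper, and the example API catalogue are all assumptions introduced for illustration.

```python
# Hedged sketch of "retrieve APIs via semantic description matching".
# The model choice, helper names, and catalogue below are illustrative
# assumptions, not the paper's actual implementation.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

# Hypothetical repository API catalogue: signature -> natural-language description.
api_catalogue = {
    "cache.get(key)": "Look up a value in the in-memory cache by key.",
    "cache.put(key, value)": "Insert or overwrite a cache entry.",
    "db.query(sql)": "Run a SQL query against the project database.",
    "http.fetch(url)": "Download the body of a remote URL.",
}

api_names = list(api_catalogue)
api_embeddings = model.encode(list(api_catalogue.values()))  # shape (n_apis, dim)

def retrieve_apis(implementation_steps, top_k=2):
    """For each implementation step, return the top_k APIs whose
    descriptions are most similar to the step (cosine similarity)."""
    step_embeddings = model.encode(implementation_steps)
    # Normalize rows so the dot product equals cosine similarity.
    a = api_embeddings / np.linalg.norm(api_embeddings, axis=1, keepdims=True)
    s = step_embeddings / np.linalg.norm(step_embeddings, axis=1, keepdims=True)
    scores = s @ a.T  # shape (n_steps, n_apis)
    return [[api_names[j] for j in np.argsort(-row)[:top_k]] for row in scores]

# Example: steps a chain-of-thought decomposition might produce for
# "return the cached page for a URL, fetching it on a miss".
steps = [
    "check whether the page is already cached",
    "download the page if it is not cached",
    "store the downloaded page in the cache",
]
for step, apis in zip(steps, retrieve_apis(steps)):
    print(f"{step!r} -> {apis}")
```

Matching against API descriptions rather than API code keeps both sides of the comparison in natural language, which is consistent with the abstract's framing of bridging user queries and repository APIs through description matching.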
- Wenchao Gu (10 papers)
- Juntao Chen (45 papers)
- Yanlin Wang (76 papers)
- Tianyue Jiang (2 papers)
- Xingzhe Li (6 papers)
- Mingwei Liu (21 papers)
- Xilin Liu (26 papers)
- Yuchi Ma (22 papers)
- Zibin Zheng (194 papers)