When Language Model Meets Private Library (2210.17236v1)

Published 31 Oct 2022 in cs.PL, cs.CL, and cs.SE

Abstract: With the rapid development of pre-training techniques, a number of LLMs have been pre-trained on large-scale code corpora and perform well in code generation. In this paper, we investigate how to equip pre-trained LLMs with the ability of code generation for private libraries. In practice, it is common for programmers to write code using private libraries. However, this is a challenge for LLMs since they have never seen private APIs during training. Motivated by the fact that private libraries usually come with elaborate API documentation, we propose a novel framework with two modules: the APIRetriever finds useful APIs, and then the APICoder generates code using these APIs. For APIRetriever, we present a dense retrieval system and also design a friendly interaction to involve users. For APICoder, we can directly use off-the-shelf LLMs, or continually pre-train the base model on a code corpus containing API information. Both modules are trained with data from public libraries and can be generalized to private ones. Furthermore, we craft three benchmarks for private libraries, named TorchDataEval, MonkeyEval, and BeatNumEval. Experimental results demonstrate the impressive performance of our framework.
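
The abstract describes a retrieve-then-generate pipeline: a retriever ranks API documentation entries against the user's intent, and a code generator is conditioned on the retrieved entries. The sketch below illustrates that flow only; it is not the authors' implementation. The bag-of-words similarity stands in for the paper's dense retriever, the example API docs and the function names (`embed`, `retrieve`, `build_prompt`) are hypothetical, and the final call to a code LLM is left as a stub.

```python
# Illustrative sketch of a retrieve-then-generate pipeline (not the paper's code).
# A toy bag-of-words similarity replaces the dense APIRetriever, and the
# generation step is stubbed: the constructed prompt would be fed to a code LLM.
from collections import Counter
import math

# Hypothetical API documentation entries standing in for a library's docs.
api_docs = {
    "torchdata.datapipes.iter.Mapper": "Applies a function over each item from the source DataPipe.",
    "torchdata.datapipes.iter.Filter": "Filters out elements from the source DataPipe according to a predicate.",
    "torchdata.datapipes.iter.Shuffler": "Shuffles the input DataPipe with a buffer.",
}

def embed(text: str) -> Counter:
    """Toy stand-in for a dense encoder: a lower-cased bag of words."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """APIRetriever stand-in: rank API docs by similarity to the user intent."""
    q = embed(query)
    ranked = sorted(api_docs, key=lambda name: cosine(q, embed(api_docs[name])), reverse=True)
    return ranked[:k]

def build_prompt(query: str, apis: list[str]) -> str:
    """APICoder stand-in: prepend the retrieved API docs to the generation prompt."""
    doc_block = "\n".join(f"# {name}: {api_docs[name]}" for name in apis)
    return f"{doc_block}\n# Task: {query}\n"

query = "filter out even numbers from a data pipe"
prompt = build_prompt(query, retrieve(query))
print(prompt)  # this prompt would then be passed to an off-the-shelf code LLM
```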

Authors (6)
  1. Daoguang Zan (24 papers)
  2. Bei Chen (56 papers)
  3. Zeqi Lin (25 papers)
  4. Bei Guan (11 papers)
  5. Yongji Wang (21 papers)
  6. Jian-Guang Lou (69 papers)
Citations (58)
