
On the Effectiveness of Large Language Models in Domain-Specific Code Generation (2312.01639v6)

Published 4 Dec 2023 in cs.SE

Abstract: LLMs such as ChatGPT have shown remarkable capabilities in code generation. Despite significant achievements, they rely on enormous training data to acquire a broad spectrum of open-domain knowledge. Moreover, their evaluation revolves around open-domain benchmarks like HumanEval, which primarily consist of programming contests. It is therefore hard to fully characterize the intricacies and challenges associated with particular domains (e.g., web, game, and math). In this paper, we conduct an in-depth study of LLMs in domain-specific code generation. Our results demonstrate that LLMs exhibit sub-optimal performance in generating domain-specific code, due to their limited proficiency in utilizing domain-specific libraries. We further observe that incorporating API knowledge into prompts can empower LLMs to generate more professional code. Based on these findings, we investigate how to effectively incorporate API knowledge into the code generation process. We experiment with three strategies for incorporating domain knowledge, namely, external knowledge inquirer, chain-of-thought prompting, and chain-of-thought fine-tuning, and refer to them collectively as a new code generation approach called DomCoder. Experimental results show that all DomCoder strategies improve the effectiveness of domain-specific code generation under certain settings.
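
To make the "incorporating API knowledge as prompts" idea concrete, here is a minimal sketch of what such a pipeline could look like. It is not the paper's implementation: the toy keyword-overlap retriever stands in for DomCoder's external knowledge inquirer, and the API names, docstrings, and the `retrieve_api_knowledge`/`build_prompt` helpers are all illustrative assumptions.

```python
# Sketch of API-knowledge-augmented prompting (illustrative, not DomCoder's
# actual implementation): retrieve documentation for candidate
# domain-specific APIs and prepend it to a chain-of-thought prompt.

# Hypothetical in-memory knowledge base for a web-domain library.
API_DOCS = {
    "flask.Flask.route": "Decorator that registers a view function for a URL rule.",
    "flask.request.args": "MultiDict of parsed query-string parameters.",
    "flask.jsonify": "Serializes arguments to JSON and returns a Response.",
}

def retrieve_api_knowledge(task: str, k: int = 2) -> list[str]:
    """Rank API docs by naive keyword overlap with the task description
    (a stand-in for a real retriever)."""
    words = set(task.lower().split())
    ranked = sorted(
        API_DOCS.items(),
        key=lambda item: -len(words & set(item[1].lower().split())),
    )
    return [f"{name}: {doc}" for name, doc in ranked[:k]]

def build_prompt(task: str) -> str:
    """Compose a prompt that supplies API knowledge before the task,
    then asks the model to reason step by step before coding."""
    knowledge = "\n".join(retrieve_api_knowledge(task))
    return (
        "You may find these domain-specific APIs useful:\n"
        f"{knowledge}\n\n"
        f"Task: {task}\n"
        "First reason step by step about which APIs to use, then write the code."
    )

if __name__ == "__main__":
    print(build_prompt("Return the query parameters as JSON from a Flask endpoint"))
```

The design point the sketch illustrates is the paper's observation: supplying relevant API documentation at generation time compensates for the model's limited proficiency with domain-specific libraries, without retraining; the fine-tuning strategy instead bakes such API-grounded reasoning into the model's weights.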

Authors (9)
  1. Meng Chen (98 papers)
  2. Hongyu Zhang (147 papers)
  3. Chengcheng Wan (14 papers)
  4. Zhao Wei (13 papers)
  5. Yong Xu (432 papers)
  6. Juhong Wang (5 papers)
  7. Xiaodong Gu (62 papers)
  8. Yalan Lin (3 papers)
  9. Yuhan Hu (6 papers)
Citations (8)