
CodeExp: Explanatory Code Document Generation (2211.15395v1)

Published 25 Nov 2022 in cs.CL and cs.LG

Abstract: Developing models that can automatically generate detailed code explanations can greatly benefit software maintenance and programming education. However, existing code-to-text generation models often produce only high-level summaries of code that do not capture the implementation-level choices essential for these scenarios. To fill this gap, we propose the code explanation generation task. We first conducted a human study to identify the criteria for high-quality explanatory docstrings for code. Based on that, we collected and refined a large-scale code-docstring corpus and formulated automatic evaluation metrics that best match human assessments. Finally, we present a multi-stage fine-tuning strategy and baseline models for the task. Our experiments show that (1) our refined training dataset lets models achieve better performance on the explanation generation task than unrefined data 15x larger, and (2) fine-tuned models can generate well-structured long docstrings comparable to human-written ones. We envision that our training dataset, human-evaluation protocol, recommended metrics, and fine-tuning strategy can boost future code explanation research. The code and annotated data are available at https://github.com/subercui/CodeExp.
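
At a high level, the approach fine-tunes a pretrained language model on (code, explanatory docstring) pairs; the multi-stage strategy applies this first to a large unrefined corpus and then to the refined set. Below is a minimal single-stage sketch using Hugging Face Transformers. The CodeT5 backbone, data file names, field names, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of one fine-tuning stage: a seq2seq model trained to map
# source code to an explanatory docstring. Backbone and data layout are
# assumptions, not the paper's exact setup.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

MODEL_NAME = "Salesforce/codet5-base"  # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Hypothetical JSON-lines file; each line holds "code" and "docstring" fields.
data = load_dataset("json", data_files={"train": "train.jsonl"})

def preprocess(batch):
    # Input: raw code; target labels: the explanatory docstring.
    enc = tokenizer(batch["code"], max_length=512, truncation=True)
    enc["labels"] = tokenizer(
        text_target=batch["docstring"], max_length=256, truncation=True
    )["input_ids"]
    return enc

tokenized = data["train"].map(
    preprocess, batched=True, remove_columns=data["train"].column_names
)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="codeexp-finetune",
        per_device_train_batch_size=8,
        learning_rate=5e-5,
        num_train_epochs=3,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

The second stage would repeat this loop on the smaller refined dataset, which is what the paper's experiments compare against training on the 15x larger unrefined corpus alone.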

Authors (8)
  1. Haotian Cui (6 papers)
  2. Chenglong Wang (80 papers)
  3. Junjie Huang (73 papers)
  4. Jeevana Priya Inala (18 papers)
  5. Todd Mytkowicz (13 papers)
  6. Bo Wang (823 papers)
  7. Jianfeng Gao (344 papers)
  8. Nan Duan (172 papers)
Citations (4)