
ESALE: Enhancing Code-Summary Alignment Learning for Source Code Summarization (2407.01646v1)

Published 1 Jul 2024 in cs.SE and cs.AI

Abstract: (Source) code summarization aims to automatically generate succinct natural language summaries for given code snippets. Such summaries play a significant role in helping developers understand and maintain code. Inspired by neural machine translation, deep learning-based code summarization techniques widely adopt an encoder-decoder framework, where the encoder transforms given code snippets into context vectors, and the decoder decodes context vectors into summaries. Recently, large-scale pre-trained models for source code, equipped with encoders capable of producing general context vectors, have achieved substantial improvements on code summarization. However, because they are usually trained mainly on code-focused tasks and capture general code features, they still fall short in capturing the specific features that need to be summarized. This paper proposes a novel approach to improve code summarization based on summary-focused tasks. Specifically, we exploit a multi-task learning paradigm to train the encoder on three summary-focused tasks to enhance its ability to learn code-summary alignment, including unidirectional language modeling (ULM), masked language modeling (MLM), and action word prediction (AWP). Unlike pre-trained models that mainly predict masked tokens in code snippets, we design ULM and MLM to predict masked words in summaries. Intuitively, predicting words based on given code snippets would help learn the code-summary alignment. Additionally, we introduce the domain-specific task AWP to enhance the ability of the encoder to learn the alignment between action words and code snippets. Extensive experiments on four datasets demonstrate that our approach, called ESALE, significantly outperforms baselines on all three widely used metrics, including BLEU, METEOR, and ROUGE-L.
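To make the multi-task setup concrete, the sketch below shows one plausible way a shared encoder could be trained jointly on ULM, MLM, and AWP objectives. This is an illustrative assumption, not the authors' implementation: the module names, dimensions, pooling strategy, and equal weighting of the three losses are all hypothetical, and a proper ULM head would additionally use a causal attention mask.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of summary-focused multi-task training (not the ESALE code).
# A shared encoder reads code plus (partially masked) summary tokens and feeds
# three heads: ULM and MLM over summary positions, AWP over the pooled code.
class MultiTaskSummaryAlignment(nn.Module):
    def __init__(self, vocab_size=30000, num_action_words=500, d_model=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.ulm_head = nn.Linear(d_model, vocab_size)   # predict next summary word
        self.mlm_head = nn.Linear(d_model, vocab_size)   # predict masked summary word
        self.awp_head = nn.Linear(d_model, num_action_words)  # predict action word

    def forward(self, code_ids, summary_ids):
        # Encode code and summary jointly (a causal mask for ULM is omitted here).
        hidden = self.encoder(self.embed(torch.cat([code_ids, summary_ids], dim=1)))
        code_len = code_ids.size(1)
        summary_hidden = hidden[:, code_len:, :]
        pooled_code = hidden[:, :code_len, :].mean(dim=1)
        return (self.ulm_head(summary_hidden),
                self.mlm_head(summary_hidden),
                self.awp_head(pooled_code))


def multi_task_loss(ulm_logits, mlm_logits, awp_logits,
                    ulm_targets, mlm_targets, awp_targets):
    # Equal weighting of the three tasks is an assumption; positions that are
    # not predicted carry the ignore_index label (-100).
    ce = nn.CrossEntropyLoss(ignore_index=-100)
    return (ce(ulm_logits.transpose(1, 2), ulm_targets)
            + ce(mlm_logits.transpose(1, 2), mlm_targets)
            + ce(awp_logits, awp_targets))
```

In such a setup, ULM and MLM force the encoder to predict summary words conditioned on the code, while AWP ties the code representation to the summary's action word, which is the intuition behind the code-summary alignment the abstract describes.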

Authors (10)
  1. Chunrong Fang (71 papers)
  2. Weisong Sun (45 papers)
  3. Yuchen Chen (18 papers)
  4. Xiao Chen (277 papers)
  5. Zhao Wei (13 papers)
  6. Quanjun Zhang (36 papers)
  7. Yudu You (5 papers)
  8. Bin Luo (209 papers)
  9. Yang Liu (2253 papers)
  10. Zhenyu Chen (91 papers)
Citations (5)