HatCUP: Hybrid Analysis and Attention based Just-In-Time Comment Updating (2205.00600v1)

Published 2 May 2022 in cs.SE

Abstract: When changing code, developers sometimes neglect to update the related comments, resulting in inconsistent or outdated comments. Such comments increase the cost of program comprehension and greatly reduce software maintainability. Researchers have proposed solutions such as CUP and HEBCUP, which update comments efficiently for simple code changes (i.e., modification of a single token) but fall short on complex ones. In this paper, we propose HatCUP (Hybrid Analysis and Attention based Comment UPdater), which provides a new mechanism for the comment-updating task. HatCUP focuses on hybrid analysis and hybrid information. First, HatCUP considers code structure change information and introduces a structure-guided attention mechanism combined with code change graph analysis and optimistic data flow dependency analysis. Using a popular RNN-based encoder-decoder architecture, HatCUP takes the code edit actions, the syntactic, semantic, and structural code changes, and the old comments as input, and generates a structural representation of the changes in the current code snippet. Furthermore, instead of directly generating new comments, HatCUP proposes an edit-or-non-edit mechanism that mimics human editing behavior by generating a sequence of edit actions, and constructs a modified RNN model to integrate the newly developed components. Evaluation on a popular dataset demonstrates that HatCUP outperforms the state-of-the-art deep learning based approach (CUP) by 53.8% in accuracy, 31.3% in recall, and 14.3% in METEOR on the original metrics. Compared with the heuristic-based approach (HEBCUP), HatCUP also shows better overall performance.
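
The edit-or-non-edit mechanism is the most concrete idea in the abstract: rather than generating the new comment from scratch, the decoder emits a sequence of edit actions over the old comment's tokens. Below is a minimal illustrative sketch of what such an action sequence looks like, not HatCUP's actual implementation: `difflib` stands in for the learned decoder, and the action vocabulary (keep/delete/insert/replace) is an assumption chosen for illustration.

```python
# Sketch only: difflib computes the edit actions that HatCUP's decoder
# would be trained to predict from the code change and the old comment.
import difflib

def to_edit_actions(old_tokens, new_tokens):
    """Convert an old/new comment token pair into (action, tokens) steps."""
    actions = []
    matcher = difflib.SequenceMatcher(a=old_tokens, b=new_tokens)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            actions.append(("keep", old_tokens[i1:i2]))   # the "non-edit" case
        elif tag == "delete":
            actions.append(("delete", old_tokens[i1:i2]))
        elif tag == "insert":
            actions.append(("insert", new_tokens[j1:j2]))
        elif tag == "replace":
            actions.append(("replace", new_tokens[j1:j2]))
    return actions

old = "returns the sum of two integers".split()
new = "returns the product of two integers".split()
print(to_edit_actions(old, new))
# [('keep', ['returns', 'the']), ('replace', ['product']),
#  ('keep', ['of', 'two', 'integers'])]
```

Emitting actions instead of a full comment lets unchanged tokens be kept verbatim, mirroring how a developer edits an existing comment rather than rewriting it, which is the human editing behavior the abstract says the mechanism is designed to mimic.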

Citations (4)
