
Learning to Represent Patches (2308.16586v2)

Published 31 Aug 2023 in cs.SE

Abstract: Patch representation is crucial for automating various software engineering tasks, such as determining patch accuracy or summarizing code changes. While recent research has employed deep learning for patch representation, focusing on token sequences or Abstract Syntax Trees (ASTs), these approaches often miss a change's semantic intent and the context of the modified lines. To bridge this gap, we introduce Patcherizer, a novel method that captures both contextual and structural intentions by merging the surrounding code context with two innovative representations: one for the intention in the code changes themselves, and one for the intention in the AST structural modifications before and after the patch. Together, these yield a holistic representation of a patch's underlying intention. Patcherizer employs graph convolutional neural networks for the structural intention graph representation and transformers for the intention sequence representation. We evaluated the versatility of Patcherizer's embeddings on three tasks: (1) patch description generation, (2) patch accuracy prediction, and (3) patch intention identification. Our experiments demonstrate the representation's efficacy across all tasks, outperforming state-of-the-art methods. For example, in patch description generation, Patcherizer achieves average improvements of 19.39% in BLEU, 8.71% in ROUGE-L, and 34.03% in METEOR.
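The abstract describes a two-branch encoder: a transformer over the patch token sequence (sequence intention) and a graph convolutional network over the pre/post-patch AST graph (structural intention), fused into a single patch embedding. The sketch below illustrates that general architecture in PyTorch; it is not the authors' implementation, and the class names (`IntentionGCN`, `PatchEncoder`), layer sizes, and mean-pool/concatenation fusion are all assumptions.

```python
# A minimal sketch, in PyTorch, of the two-branch encoder the abstract
# describes. All names, dimensions, and the mean-pool/concatenate fusion
# are illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn


class IntentionGCN(nn.Module):
    """One graph-convolution layer over the AST graph: H' = relu(A_hat H W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Add self-loops, then symmetrically normalize the adjacency matrix.
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(a_norm @ self.linear(node_feats))


class PatchEncoder(nn.Module):
    """Fuses a transformer over patch tokens (sequence intention) with a GCN
    over the pre/post-patch AST graph (structural intention)."""

    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.seq_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.gcn = IntentionGCN(dim, dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, tokens, node_feats, adj):
        seq = self.seq_encoder(self.embed(tokens)).mean(dim=1)       # (B, dim)
        graph = self.gcn(node_feats, adj).mean(dim=0, keepdim=True)  # (1, dim)
        graph = graph.expand(seq.size(0), -1)                        # broadcast to batch
        return self.fuse(torch.cat([seq, graph], dim=-1))            # patch embedding


# Toy usage with random inputs: 2 patches of 16 tokens, a 5-node AST graph.
enc = PatchEncoder(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 16))
node_feats = torch.randn(5, 128)
adj = torch.tensor([[0, 1, 1, 0, 0],
                    [1, 0, 0, 1, 0],
                    [1, 0, 0, 0, 1],
                    [0, 1, 0, 0, 0],
                    [0, 0, 1, 0, 0]], dtype=torch.float32)
print(enc(tokens, node_feats, adj).shape)  # torch.Size([2, 128])
```

The resulting embedding would then feed task-specific heads, e.g. a decoder for patch description generation or a classifier for patch accuracy prediction, matching the three downstream evaluations the abstract lists.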

Authors (9)
  1. Xunzhu Tang (23 papers)
  2. Haoye Tian (27 papers)
  3. Zhenghan Chen (12 papers)
  4. Weiguo Pian (12 papers)
  5. Saad Ezzini (18 papers)
  6. Abdoul Kader Kaboré (8 papers)
  7. Andrew Habib (9 papers)
  8. Jacques Klein (89 papers)
  9. Tegawendé F. Bissyandé (10 papers)
Citations (3)
