
Unity is Strength: Cross-Task Knowledge Distillation to Improve Code Review Generation (2309.03362v1)

Published 6 Sep 2023 in cs.SE

Abstract: Code review is a fundamental process in software development that plays a critical role in ensuring code quality and reducing the likelihood of errors and bugs. However, code review can be complex, subjective, and time-consuming. Comment generation and code refinement are two key tasks of this process, and their automation has traditionally been addressed separately in the literature using different approaches. In this paper, we propose DISCOREV, a novel deep-learning architecture based on cross-task knowledge distillation that addresses these two tasks simultaneously. In our approach, fine-tuning of the comment generation model is guided by the code refinement model. We implement this guidance through two strategies: a feedback-based learning objective and an embedding alignment objective. We evaluated our approach by comparing it to state-of-the-art methods based on independent training and fine-tuning. Our results show that our approach generates better review comments, as measured by BLEU score.
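
The abstract names two guidance strategies without detailing them here; the sketch below illustrates one plausible way such a combined objective could be wired up in PyTorch. All function names, tensor shapes, and the weights `alpha` and `beta` are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of the two guidance strategies described in the
# abstract; shapes, names, and weights are assumptions, not DISCOREV's code.
import torch
import torch.nn.functional as F

def embedding_alignment_loss(comment_emb: torch.Tensor,
                             refine_emb: torch.Tensor) -> torch.Tensor:
    """Pull the comment-generation model's representation toward the
    code-refinement model's representation of the same review instance."""
    return 1.0 - F.cosine_similarity(comment_emb, refine_emb, dim=-1).mean()

def feedback_loss(refine_logits: torch.Tensor,
                  target_code_ids: torch.Tensor) -> torch.Tensor:
    """Score the generated comment by how well the refinement model,
    conditioned on it, predicts the target revised code tokens."""
    # (batch, seq, vocab) -> (batch, vocab, seq) for cross_entropy.
    return F.cross_entropy(refine_logits.transpose(1, 2), target_code_ids)

def discorev_loss(ce_loss: torch.Tensor,
                  comment_emb: torch.Tensor,
                  refine_emb: torch.Tensor,
                  refine_logits: torch.Tensor,
                  target_code_ids: torch.Tensor,
                  alpha: float = 0.5,
                  beta: float = 0.5) -> torch.Tensor:
    """Combined fine-tuning objective for the comment generator:
    its usual sequence cross-entropy plus the two guidance terms."""
    return (ce_loss
            + alpha * feedback_loss(refine_logits, target_code_ids)
            + beta * embedding_alignment_loss(comment_emb, refine_emb))

# Toy usage with random tensors (batch=2, seq=5, vocab=100, dim=16).
ce = torch.tensor(2.3)
c_emb, r_emb = torch.randn(2, 16), torch.randn(2, 16)
logits, tgt = torch.randn(2, 5, 100), torch.randint(0, 100, (2, 5))
print(discorev_loss(ce, c_emb, r_emb, logits, tgt))
```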

Authors (3)
  1. Oussama Ben Sghaier (6 papers)
  2. Lucas Maes (3 papers)
  3. Houari Sahraoui (31 papers)
Citations (1)
