Fast Changeset-based Bug Localization with BERT (2112.14169v2)

Published 28 Dec 2021 in cs.SE

Abstract: Automatically localizing software bugs to the changesets that induced them has the potential to improve software developer efficiency and to positively affect software quality. To facilitate this automation, a bug report has to be effectively matched with source code changes, even when a significant lexical gap exists between the natural language used to describe the bug and the identifier naming practices used by developers. To bridge this gap, we need techniques that are able to capture software engineering-specific and project-specific semantics in order to detect relatedness between the two types of documents that goes beyond exact term matching. Popular transformer-based deep learning architectures, such as BERT, excel at leveraging contextual information and hence appear to be suitable candidates for the task. However, BERT-like models are computationally expensive, which precludes them from being used in environments where response time is important. In this paper, we describe how BERT can be made fast enough to be applicable to changeset-based bug localization. We also explore several design decisions in using BERT for this purpose, including how best to encode changesets and how to match bug reports to individual changes for improved accuracy. We compare the accuracy and performance of our model to a non-contextual baseline (i.e., a vector space model) and to BERT-based architectures previously used in software engineering. Our evaluation results demonstrate advantages of the proposed BERT model over the baselines, especially for bug reports that lack any hints about related code elements.
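The core matching task the abstract describes can be illustrated with a minimal sketch: embed the bug report and candidate changeset texts with a BERT encoder and rank changesets by cosine similarity. This is not the paper's optimized pipeline (which adapts BERT for speed and explores changeset-specific encodings); the model name (bert-base-uncased), mean pooling over tokens, and the example texts below are all illustrative assumptions.

```python
# Minimal sketch, not the paper's method: rank candidate changesets against a
# bug report by cosine similarity of mean-pooled BERT embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: a generic pre-trained BERT; the paper tunes its own fast variant.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(texts):
    """Mean-pool the last hidden layer over non-padding tokens."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state   # (batch, tokens, hidden)
    mask = batch["attention_mask"].unsqueeze(-1)    # (batch, tokens, 1)
    return (hidden * mask).sum(1) / mask.sum(1)     # (batch, hidden)

# Hypothetical example texts; a real system would index commit messages/diffs.
bug_report = "NullPointerException when saving a file with an empty name"
changesets = [
    "fix: guard against empty filename in FileSaver.save",
    "refactor: rename internal logging helpers",
]

query = embed([bug_report])
docs = embed(changesets)
scores = torch.nn.functional.cosine_similarity(query, docs)
for text, score in sorted(zip(changesets, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {text}")
```

A non-contextual vector space model baseline, as mentioned in the abstract, would replace the embed function with TF-IDF vectors, which can only reward exact term overlap; the contextual encoder is what lets semantically related but lexically different bug reports and changesets score highly.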

Citations (47)
