Better Debugging: Combining Static Analysis and LLMs for Explainable Crashing Fault Localization (2408.12070v1)

Published 22 Aug 2024 in cs.SE

Abstract: Many applications today do not exist in isolation but rely on various frameworks and libraries. The frequent evolution and complex implementation of framework APIs cause many unexpected post-release crashes. Starting from crash stack traces, existing approaches either perform direct call graph (CG) tracing or mine datasets of similar crash-fixing records to locate buggy methods. However, these approaches are limited by the completeness of the CG or depend on historical fixing records. Moreover, they fail to explain the buggy candidates by revealing their relationship to the crashing point. To fill this gap, we propose an explainable crashing fault localization approach that combines static analysis and LLM techniques. Our primary insight is that understanding the semantics of exception-throwing statements in the framework code helps find and understand the buggy methods in the app code. Based on this idea, we first design the exception-thrown summary (ETS), which describes the key elements related to each framework-specific exception, and extract ETSs by performing static analysis. We then perform data tracking on these key elements to identify and rank buggy candidates for a given crash. After that, we introduce LLMs to improve the explainability of the localization results. To construct effective LLM prompts, we design the candidate information summary (CIS), which describes multiple types of explanation-related context, and extract CISs via static analysis. We apply our approach to one typical scenario, locating Android framework-specific crashing faults, and implement a tool, CrashTracker. For fault localization, it achieved an overall mean reciprocal rank (MRR) of 0.91. For fault explanation, compared to the naive explanation produced by static analysis alone, the LLM-powered explanation achieved a 67.04% improvement in users' satisfaction score.
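
To make the pipeline concrete, below is a minimal Python sketch of how the ETS and CIS described in the abstract might be represented and combined into an LLM prompt. All class names, fields, the variable-overlap ranking heuristic, and the example method names are illustrative assumptions under this sketch; they are not taken from the actual CrashTracker implementation.

```python
# Minimal sketch of the ETS/CIS pipeline from the abstract.
# All names, fields, and the ranking heuristic are assumptions for
# illustration; they do not reflect CrashTracker's real internals.
from dataclasses import dataclass


@dataclass
class ExceptionThrownSummary:
    """ETS: key elements of a framework exception-throwing statement."""
    exception_type: str       # e.g. "java.lang.IllegalStateException"
    throwing_method: str      # framework method containing the throw
    key_variables: list[str]  # variables whose values trigger the throw


@dataclass
class CandidateInfoSummary:
    """CIS: explanation-related context for one buggy-method candidate."""
    method_signature: str
    relation_to_crash: str    # how the candidate reaches the crash point
    code_snippet: str


def rank_candidates(ets: ExceptionThrownSummary,
                    app_methods: dict[str, set[str]]) -> list[str]:
    """Rank app methods by how many ETS key variables they influence.

    `app_methods` maps a method signature to the set of framework
    variables it writes; this stands in for the data-tracking step,
    which a real tool would perform with static data-flow analysis.
    """
    scored = [(len(vars_written & set(ets.key_variables)), sig)
              for sig, vars_written in app_methods.items()]
    return [sig for score, sig in sorted(scored, reverse=True) if score > 0]


def build_prompt(ets: ExceptionThrownSummary,
                 cis: CandidateInfoSummary) -> str:
    """Assemble an LLM prompt from the ETS and one candidate's CIS."""
    return (
        f"A crash raised {ets.exception_type} inside {ets.throwing_method}.\n"
        f"Candidate buggy method: {cis.method_signature}\n"
        f"Relation to the crash point: {cis.relation_to_crash}\n"
        f"Code:\n{cis.code_snippet}\n"
        "Explain why this method is likely responsible for the crash."
    )


if __name__ == "__main__":
    # Hypothetical Android crash: state loss after onSaveInstanceState.
    ets = ExceptionThrownSummary(
        exception_type="java.lang.IllegalStateException",
        throwing_method="android.app.FragmentManagerImpl.checkStateLoss",
        key_variables=["mStateSaved"],
    )
    candidates = rank_candidates(ets, {
        "com.example.MainActivity.onStop": {"mStateSaved"},
        "com.example.Util.log": set(),
    })
    print(candidates)  # ['com.example.MainActivity.onStop']
```

The sketch keeps the two roles separate: static analysis produces the structured summaries (ETS for the framework side, CIS for each app-side candidate), and the LLM is only asked to explain a candidate that the analysis has already identified and ranked.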

Citations (1)
