
Explainable Artificial Intelligence Techniques for Software Development Lifecycle: A Phase-specific Survey

Published 11 May 2025 in cs.SE, cs.AI, and cs.LG (arXiv:2505.07058v1)

Abstract: AI is rapidly expanding and integrating more into daily life to automate tasks, guide decision making, and enhance efficiency. However, complex AI models, which make decisions without providing clear explanations (known as the "black-box problem"), currently restrict trust and widespread adoption of AI. Explainable Artificial Intelligence (XAI) has emerged to address the black-box problem of making AI systems more interpretable and transparent so stakeholders can trust, verify, and act upon AI-based outcomes. Researchers have developed various techniques to foster XAI in the Software Development Lifecycle. However, there are gaps in applying XAI techniques in the Software Engineering phases. Literature review shows that 68% of XAI in Software Engineering research is focused on maintenance as opposed to 8% on software management and requirements. In this paper, we present a comprehensive survey of the applications of XAI methods such as concept-based explanations, Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), rule extraction, attention mechanisms, counterfactual explanations, and example-based explanations to the different phases of the Software Development Life Cycle (SDLC), including requirements elicitation, design and development, testing and deployment, and evolution. To the best of our knowledge, this paper presents the first comprehensive survey of XAI techniques for every phase of the Software Development Life Cycle (SDLC). This survey aims to promote explainable AI in Software Engineering and facilitate the practical application of complex AI models in AI-driven software development.

Summary

Overview of Explainable Artificial Intelligence Techniques for the Software Development Lifecycle: A Phase-specific Survey

The paper "Explainable Artificial Intelligence Techniques for Software Development Lifecycle: A Phase-specific Survey" thoroughly investigates the integration of Explainable AI (XAI) within Software Engineering. It addresses the pervasive "black-box" problem associated with AI, emphasizing the necessity for adopting XAI to enhance transparency and trust in AI systems.

Key Findings and Numerical Insights

This survey delineates the application of several XAI methods, including concept-based explanations, LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), rule extraction, and counterfactual explanations, across the phases of the Software Development Lifecycle (SDLC). The paper quantifies the imbalance in XAI research focus: 68% of the surveyed work targets the maintenance phase, whereas only 8% addresses software management and requirements. It further underscores that a one-size-fits-all application of XAI is insufficient, since each software engineering phase imposes distinct demands.
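To make the feature-attribution methods the survey discusses concrete, the following is a minimal, self-contained sketch of exact Shapley value computation, the quantity SHAP approximates, applied to a hypothetical defect-risk scorer over code metrics. The scorer, its feature names, and the baseline are all invented for illustration; this is not the survey's own code, and real SHAP implementations use sampling or model-specific shortcuts because the exact sum below is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

def exact_shapley(model, instance, baseline):
    """Exact Shapley values for a small feature set.

    `model` maps a feature dict to a score; features absent from a
    coalition take their baseline value. Exponential in the number
    of features, so only viable for toy models.
    """
    features = list(instance)
    n = len(features)

    def value(subset):
        x = {f: (instance[f] if f in subset else baseline[f]) for f in features}
        return model(x)

    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Standard Shapley weight |S|!(n-|S|-1)!/n! for coalition S
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(subset) | {f}) - value(set(subset)))
        phi[f] = total
    return phi

# Hypothetical linear defect-risk scorer (weights invented for illustration).
def risk(x):
    return 0.5 * x["churn"] + 0.3 * x["complexity"] + 0.2 * x["coverage_gap"]

instance = {"churn": 10.0, "complexity": 4.0, "coverage_gap": 2.0}
baseline = {"churn": 0.0, "complexity": 0.0, "coverage_gap": 0.0}
phi = exact_shapley(risk, instance, baseline)
# For a linear model with a zero baseline, each Shapley value equals
# weight x feature value, and the values sum to the prediction gap.
```

The attributions satisfy the efficiency property: they sum to the difference between the model's output on the instance and on the baseline, which is what makes them usable as per-feature explanations of a single prediction.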

Methodological Approach

To structure its insights, the research employed a mixed-methods approach combining a systematic literature review (SLR) with a narrative literature review, enabling a comprehensive assessment of current XAI research gaps and techniques. This approach allowed the researchers to categorize the intricacies of XAI application across SDLC phases, each with distinct challenges and applicable XAI methods.

Phase-specific XAI Applications

For each phase of the SDLC, the paper systematically outlines the following:

  • Requirement Elicitation: Challenges such as ambiguity and incompleteness warrant attention. Here, LIME and SHAP are suggested for revealing influential features, while counterfactual explanations aid in illustrating input-output sensitivities.

  • Design: XAI techniques, including rule extraction and concept-based explanations, help explain AI recommendations on system architecture, emphasizing the rationale behind selected design patterns and associated trade-offs.

  • Development: LIME and SHAP serve to elucidate AI-generated code suggestions, enhancing the understanding of correctness and reliability in generated outputs.

  • Testing: Feature attribution methods like SHAP assist in uncovering the rationale for test outcomes, while counterfactuals propose adjustments to test cases to achieve alternative results.

  • Deployment and Monitoring: These phases benefit from the ability of LIME and SHAP to surface performance anomalies, supporting informed responses to AI-driven insights.

  • Maintenance and Evolution: Here, attention mechanisms and example-based explanations enhance understanding of AI-predicted bug fixes and code refactoring proposals.
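The counterfactual explanations mentioned for the testing phase can be sketched with a toy search: given a predictor of whether a test case will fail, find a small set of input adjustments that flips the outcome. The predictor, its thresholds, and the tweak list below are all hypothetical; real counterfactual methods optimise a distance-regularised objective rather than this greedy loop.

```python
def will_fail(case):
    """Hypothetical pass/fail predictor for a generated test case
    (a stand-in for a learned model; thresholds are invented)."""
    return case["timeout_ms"] < 200 or case["input_size"] > 1000

def counterfactual(case, predict, tweaks, max_steps=10):
    """Greedy counterfactual search: at each step, try every single
    tweak and return as soon as one flips the prediction; otherwise
    commit the first tweak and continue. Illustrative only."""
    current = dict(case)
    changed = []
    for _ in range(max_steps):
        if not predict(current):
            return current, changed
        for name, apply_tweak in tweaks:
            candidate = dict(current)
            apply_tweak(candidate)
            if not predict(candidate):
                return candidate, changed + [name]
        # No single tweak flips the outcome: commit the first one and retry.
        name, apply_tweak = tweaks[0]
        apply_tweak(current)
        changed.append(name)
    return None, changed

tweaks = [
    ("raise timeout", lambda c: c.update(timeout_ms=c["timeout_ms"] + 100)),
    ("shrink input", lambda c: c.update(input_size=c["input_size"] // 2)),
]

failing = {"timeout_ms": 150, "input_size": 1500}
fixed, changes = counterfactual(failing, will_fail, tweaks)
# `changes` lists the minimal sequence of adjustments found, which is
# exactly the kind of actionable "what would make this test pass"
# feedback counterfactual explanations aim to provide.
```

The returned change list ("raise timeout", then "shrink input") reads directly as a remediation suggestion, which is why counterfactuals are attractive in the testing phase: they explain a prediction in terms of the edits a developer could actually make.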

Implications and Future Directions

This survey contributes a critical discourse on the importance of adopting XAI throughout the software development lifecycle to promote transparent and reliable AI adoption in software engineering. It also highlights the lack of standard evaluation metrics for XAI methods in this domain, underscoring the need for benchmarking structures that facilitate consistent assessment.

The paper points toward prospective research avenues, particularly in the integration of XAI within agile and DevOps environments and the development of standardized XAI evaluation frameworks. Such endeavors are crucial to fostering a robust ecosystem for ethical and transparent AI-driven software development.

In summary, the paper's systematic articulation of XAI techniques across SDLC phases provides a useful reference for enhancing explainability in AI-aided software engineering, advocating a tailored, per-phase approach to address distinct challenges and foster trustworthiness in AI systems.
