
PnPXAI: A Universal XAI Framework Providing Automatic Explanations Across Diverse Modalities and Models (2505.10515v1)

Published 15 May 2025 in cs.LG and cs.AI

Abstract: Recently, post hoc explanation methods have emerged to enhance model transparency by attributing model outputs to input features. However, these methods face challenges due to their specificity to certain neural network architectures and data modalities. Existing explainable artificial intelligence (XAI) frameworks have attempted to address these challenges but suffer from several limitations. These include limited flexibility to diverse model architectures and data modalities due to hard-coded implementations, a restricted number of supported XAI methods because of the requirements for layer-specific operations of attribution methods, and sub-optimal recommendations of explanations due to the lack of evaluation and optimization phases. Consequently, these limitations impede the adoption of XAI technology in real-world applications, making it difficult for practitioners to select the optimal explanation method for their domain. To address these limitations, we introduce \textbf{PnPXAI}, a universal XAI framework that supports diverse data modalities and neural network models in a Plug-and-Play (PnP) manner. PnPXAI automatically detects model architectures, recommends applicable explanation methods, and optimizes hyperparameters for optimal explanations. We validate the framework's effectiveness through user surveys and showcase its versatility across various domains, including medicine and finance.

Authors (8)
  1. Seongun Kim (5 papers)
  2. Sol A Kim (1 paper)
  3. Geonhyeong Kim (1 paper)
  4. Enver Menadjiev (1 paper)
  5. Chanwoo Lee (10 papers)
  6. Seongwook Chung (2 papers)
  7. Nari Kim (1 paper)
  8. Jaesik Choi (66 papers)

Summary

PnPXAI: A Universal XAI Framework for Diverse Modalities and Models

The paper introduces PnPXAI, a universal framework for explainable artificial intelligence (XAI) offering a Plug-and-Play (PnP) approach to automatically generate model explanations. Unlike traditional XAI frameworks that are limited by fixed architectures and specific modalities, PnPXAI enables flexibility in accommodating diverse neural network models and data modalities, providing automatic explanations without requiring substantial user intervention.

Background and Challenges

Post hoc explanation methods serve an essential role in enhancing the interpretability and trustworthiness of complex models by attributing outputs to respective input features. Various methods, such as gradient-based techniques, relevance propagation, and model-agnostic approaches, support these objectives. However, the implementation of these methods is often restricted to specific neural architectures or data types, impacting their applicability and scalability. Additionally, challenges include the limited support for diverse XAI methods due to the need for layer-specific operations and the absence of optimization stages for recommendation. These limitations hinder the practical adoption of XAI strategies in real-world applications, such as healthcare and finance.
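To make the gradient-based family concrete, here is a minimal sketch of gradient-times-input attribution on a toy logistic-regression model. Because the model is linear in its inputs, the gradient is available in closed form; the model, weights, and inputs are illustrative and not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_x_input(w, b, x):
    """Gradient-times-input attribution for f(x) = sigmoid(w.x + b).

    The gradient of f w.r.t. x is sigmoid'(z) * w, so each feature's
    attribution is x_i * sigmoid'(z) * w_i.
    """
    z = float(w @ x + b)
    local_grad = sigmoid(z) * (1.0 - sigmoid(z)) * w  # df/dx in closed form
    return x * local_grad

# Toy model in which feature 0 dominates the prediction.
w = np.array([2.0, 0.1, -0.5])
b = 0.0
x = np.array([1.0, 1.0, 0.0])

attr = gradient_x_input(w, b, x)
```

Here feature 0 receives the largest absolute attribution, and feature 2 (whose input is zero) receives none; attributing outputs to input features in exactly this sense is what the methods above generalize to deep networks.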

PnPXAI Framework

PnPXAI is constructed to overcome these constraints. It incorporates dynamic modules that systematically address the restrictions in existing frameworks:

  • Detector Module: This component automatically identifies the structure of the given neural network model, forming the basis for the selection of appropriate explanation methodologies.
  • Recommender Module: Utilizing the model's detected architecture and data modalities, this module filters and suggests applicable explanation methods using a structured mapping table.
  • Explainer Module: This extensive pool includes both model-specific and model-agnostic methods, aiming for a comprehensive approach to generating model explanations. Notable methods include LIME, SHAP, and Integrated Gradients.
  • Evaluator and Optimizer Modules: These modules assess the quality of explanations with quantitative metrics rather than subjective human judgment, and tune each method's hyperparameters so that the framework returns the most effective explanation.
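The Recommender module's mapping-table idea can be sketched as a lookup keyed on the detected modality and architecture family, with model-agnostic methods as a fallback. The table entries and method names below are illustrative assumptions, not PnPXAI's actual tables or API.

```python
# Hypothetical mapping table: (modality, architecture family) -> applicable
# explanation methods. Entries are illustrative, not PnPXAI's real tables.
METHOD_TABLE = {
    ("image", "cnn"): ["GradCAM", "IntegratedGradients", "LIME", "KernelSHAP"],
    ("image", "transformer"): ["AttentionRollout", "IntegratedGradients", "LIME"],
    ("tabular", "mlp"): ["IntegratedGradients", "LIME", "KernelSHAP"],
    ("text", "transformer"): ["AttentionRollout", "IntegratedGradients", "LIME"],
}

def recommend(modality: str, architecture: str) -> list:
    """Filter explanation methods by the detected model properties.

    Unknown combinations fall back to model-agnostic methods, which
    need no layer-specific operations.
    """
    return METHOD_TABLE.get((modality, architecture), ["LIME", "KernelSHAP"])

methods = recommend("image", "cnn")
```

The fallback branch reflects why model-agnostic methods are valuable in such a pipeline: they remain applicable even when the Detector encounters an architecture outside the table.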

Evaluation and Use Cases

The framework's robustness is validated by practical demonstrations across several domains. In medical applications, such as liver tumor detection from CT images, the framework produced precise attribution maps that correlate well with ground-truth annotations. Similarly, in detecting acute kidney injury (AKI), it identified key biomedical markers that support the model's predictions and align with established clinical indicators.
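A perturbation-based faithfulness check of the kind an evaluator module automates can be sketched as a deletion test: remove the top-attributed feature and measure how far the prediction drops. The model, weights, and the "good" versus "bad" attributions below are toy assumptions for illustration.

```python
import numpy as np

def predict(w, x):
    """Toy logistic-regression prediction."""
    return float(1.0 / (1.0 + np.exp(-(w @ x))))

def deletion_score(w, x, attribution):
    """Deletion-style faithfulness: zero out the single top-attributed
    feature and measure the resulting prediction drop. A larger drop
    means the attribution pointed at a genuinely important feature."""
    top = int(np.argmax(np.abs(attribution)))
    x_masked = x.copy()
    x_masked[top] = 0.0
    return predict(w, x) - predict(w, x_masked)

w = np.array([2.0, 0.1, -0.5])
x = np.array([1.0, 1.0, 1.0])

good_attr = w * x                      # highlights the truly important feature
bad_attr = np.array([0.0, 1.0, 0.0])   # highlights a near-irrelevant one
```

Under this metric the faithful attribution scores higher than the unfaithful one, which is the kind of quantitative comparison that lets a framework rank candidate explanations without human labels.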

Further, the use of PnPXAI for fraud detection in finance illustrates its user-friendliness and practical utility. Developers can easily integrate and deploy explanations for models without deep XAI expertise, enhancing end-user accessibility.

User Survey and Feedback

A survey of machine learning practitioners confirmed the framework's efficacy, with high satisfaction reported for the components responsible for automatic detection and recommendation of XAI methods. Respondents also noted that the optimization stage improved the reliability and trustworthiness of the resulting explanations.

Implications and Future Developments

PnPXAI sets the stage for enhancing transparency in AI models. By simplifying the deployment of diverse explanation methods and automating the evaluation of their effectiveness, it significantly contributes to bridging the gap between complex AI systems and user interpretability. Moving forward, expanding the framework's applicability to include explanations for LLMs represents a promising trajectory, potentially involving collaborative enhancements from the broader AI research community.

In conclusion, PnPXAI is a significant advancement in the field of XAI, offering a flexible, scalable, and user-centric solution for generating automatic model explanations. This framework holds substantial potential for advancing interpretability in machine learning, paving the way for greater trust and adoption of AI technologies across disciplines.
