
Explingo: Explaining AI Predictions using Large Language Models (2412.05145v1)

Published 6 Dec 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Explanations of ML model predictions generated by Explainable AI (XAI) techniques such as SHAP are essential for people using ML outputs for decision-making. We explore the potential of LLMs to transform these explanations into human-readable, narrative formats that align with natural communication. We address two key research questions: (1) Can LLMs reliably transform traditional explanations into high-quality narratives? and (2) How can we effectively evaluate the quality of narrative explanations? To answer these questions, we introduce Explingo, which consists of two LLM-based subsystems, a Narrator and Grader. The Narrator takes in ML explanations and transforms them into natural-language descriptions. The Grader scores these narratives on a set of metrics including accuracy, completeness, fluency, and conciseness. Our experiments demonstrate that LLMs can generate high-quality narratives that achieve high scores across all metrics, particularly when guided by a small number of human-labeled and bootstrapped examples. We also identified areas that remain challenging, in particular for effectively scoring narratives in complex domains. The findings from this work have been integrated into an open-source tool that makes narrative explanations available for further applications.

An Analysis of Explingo: Leveraging LLMs for Enhanced Interpretability of ML Predictions

The paper "Explingo: Explaining AI Predictions using LLMs" introduces an innovative approach to improving the interpretability of machine learning models by transforming traditional Explainable AI (XAI) outputs into human-readable narratives. Key to this approach is the utilization of LLMs, which convert explanations generated by XAI techniques, such as SHAP, into natural language descriptions. This transformation aligns explanations with how humans naturally communicate, making machine learning outputs more accessible and understandable to users.

Core Contributions

The primary innovation presented in this work involves the two-part Explingo system, consisting of the NARRATOR and the GRADER.

  1. NARRATOR: This component uses LLMs to convert structured ML explanations into narrative form. The authors focus primarily on SHAP explanations and guide the LLMs with carefully designed prompts and a small set of exemplars, producing coherent narratives that preserve the informational content of the original ML outputs.
  2. GRADER: This subsystem evaluates the quality of generated narratives against predefined metrics: accuracy, completeness, fluency, and conciseness. By using LLM-assisted evaluation, it automates the grading process, providing consistent assessments while reducing reliance on human evaluators. A minimal sketch of both components' prompt construction follows this list.
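To make the division of labor concrete, the sketch below shows how a Narrator prompt and a Grader rubric prompt might be assembled from SHAP-style feature contributions. This is a minimal illustration, not the paper's actual prompt templates: the `llm` callable stands in for whichever LLM client is used, and the 1-4 scoring scale is assumed for illustration.

```python
from typing import Callable, List, Tuple

# (feature name, feature value, SHAP contribution) triples,
# e.g. pulled out of a shap.Explanation object upstream.
Contribution = Tuple[str, float, float]


def build_narrator_prompt(contributions: List[Contribution], prediction: float) -> str:
    """Format SHAP-style feature contributions into a narration request."""
    feature_lines = "\n".join(
        f"- {name} = {value} (contribution {contrib:+.3f})"
        for name, value, contrib in contributions
    )
    return (
        f"The model predicted {prediction}. These features contributed to the prediction:\n"
        f"{feature_lines}\n"
        "Rewrite this explanation as a short, fluent narrative for a non-expert reader."
    )


def build_grader_prompt(contributions: List[Contribution], narrative: str) -> str:
    """Ask an LLM to score a narrative on the four GRADER metrics.

    The 1-4 scale here is chosen for illustration; the paper's rubric may differ.
    """
    reference = "\n".join(
        f"- {name} = {value} (contribution {contrib:+.3f})"
        for name, value, contrib in contributions
    )
    return (
        f"Reference explanation:\n{reference}\n\n"
        f"Candidate narrative:\n{narrative}\n\n"
        "Score the narrative from 1 to 4 on accuracy, completeness, fluency, and "
        "conciseness. Answer with one 'metric: score' pair per line."
    )


def narrate(llm: Callable[[str], str], contributions: List[Contribution], prediction: float) -> str:
    """Run the narrator step with any text-completion callable (placeholder interface)."""
    return llm(build_narrator_prompt(contributions, prediction))
```

In practice the `llm` callable would wrap whatever LLM API is available; the point of the sketch is the prompt structure, not the client.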

Research Questions and Methodology

The paper addresses two pivotal research questions:

  • Can LLMs effectively transform traditional ML explanations into high-quality narratives?
  • How should the quality of these narrative explanations be evaluated?

To answer these questions, the authors developed a rigorous experimental framework involving diverse datasets and carefully curated exemplars to guide narrative generation. Specific prompting techniques and few-shot strategies were used to refine the generation process, and the experiments demonstrate a systematic approach to optimizing narrative quality across different domains.
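As a rough picture of the few-shot setup, the sketch below prepends (explanation, narrative) exemplars to a new explanation before querying the model. The prompt format, exemplar content, and the housing example are hypothetical illustrations rather than the paper's own exemplars.

```python
from typing import List, Tuple

# Each exemplar pairs a formatted raw explanation with a narrative written (or vetted) by a human.
Exemplar = Tuple[str, str]


def build_few_shot_prompt(exemplars: List[Exemplar], new_explanation: str) -> str:
    """Prepend (explanation, narrative) exemplars so the LLM imitates their style."""
    shots = "\n\n".join(
        f"Explanation:\n{raw}\nNarrative:\n{narrative}" for raw, narrative in exemplars
    )
    return f"{shots}\n\nExplanation:\n{new_explanation}\nNarrative:\n"


# Usage with a single hypothetical house-pricing exemplar:
exemplars = [(
    "- living area = 2100 sq ft (contribution +12000)\n- year built = 1975 (contribution -3500)",
    "The large living area pushes the predicted price up, while the older construction "
    "year pulls it down slightly.",
)]
prompt = build_few_shot_prompt(exemplars, "- lot size = 0.4 acres (contribution +4200)")
```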

Experimental Findings

The experimental outcomes indicate that LLMs, when properly guided, can reliably generate high-quality explanatory narratives. The paper highlights a balance between using hand-written and bootstrapped few-shot exemplars to enhance narrative style and quality while maintaining robust correctness. Though adding exemplars generally improved narrative fluency and conciseness, it occasionally introduced complexity that impacted accuracy, underscoring the importance of careful prompt design and exemplar selection.
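The bootstrapping idea can be pictured as a simple filter loop: generate a candidate narrative with the current exemplar pool, grade it, and admit only high-scoring candidates as new exemplars. The threshold, scoring interface, and prompt layout below are assumptions for illustration, not the paper's exact procedure.

```python
from typing import Callable, Dict, List, Tuple


def bootstrap_exemplars(
    llm: Callable[[str], str],
    grade: Callable[[str, str], Dict[str, float]],
    unlabeled_explanations: List[str],
    seed_exemplars: List[Tuple[str, str]],
    threshold: float = 3.5,  # illustrative cutoff, not the paper's value
) -> List[Tuple[str, str]]:
    """Grow the exemplar pool by keeping only generated narratives that grade well."""
    pool = list(seed_exemplars)
    for raw in unlabeled_explanations:
        shots = "\n\n".join(f"Explanation:\n{e}\nNarrative:\n{n}" for e, n in pool)
        candidate = llm(f"{shots}\n\nExplanation:\n{raw}\nNarrative:\n")
        scores = grade(raw, candidate)  # e.g. {"accuracy": 4.0, "completeness": 3.5, ...}
        if scores and min(scores.values()) >= threshold:
            pool.append((raw, candidate))  # high-scoring narratives become new exemplars
    return pool
```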

Implications and Future Directions

This work has significant implications for the field of interpretable AI and model transparency. The Explingo system, by integrating easily interpretable narratives into ML workflows, extends the usability of complex models in domains like healthcare, finance, and law, where decision-making requires understanding intricate model predictions.

However, the paper also identifies challenges, notably the automatic grading system's limitations when interpreting complex narratives, such as those that make comparative statements without sufficient context. These challenges suggest avenues for further research, such as enhancing contextual awareness in narrative generation and evaluating the practical use of narratives in real-world decision-making.

In conclusion, the Explingo framework represents a constructive step toward more interpretable AI systems, leveraging the advanced capabilities of LLMs to meet the needs of human-centered explanations. Future research targeting system refinements, user studies to validate narrative effectiveness, and extensions to other explanation types will further solidify these innovations' impact on enhancing model interpretability and user trust in AI systems.

Authors (5)
  1. Alexandra Zytek
  2. Sara Pido
  3. Sarah Alnegheimish
  4. Laure Berti-Equille
  5. Kalyan Veeramachaneni