Explainable Machine Learning in Deployment (1909.06342v4)

Published 13 Sep 2019 in cs.LG, cs.AI, cs.CY, cs.HC, and stat.ML

Abstract: Explainable machine learning offers the potential to provide stakeholders with insights into model behavior by using various methods such as feature importance scores, counterfactual explanations, or influential training data. Yet there is little understanding of how organizations use these methods in practice. This study explores how organizations view and use explainability for stakeholder consumption. We find that, currently, the majority of deployments are not for end users affected by the model but rather for machine learning engineers, who use explainability to debug the model itself. There is thus a gap between explainability in practice and the goal of transparency, since explanations primarily serve internal stakeholders rather than external ones. Our study synthesizes the limitations of current explainability techniques that hamper their use for end users. To facilitate end user interaction, we develop a framework for establishing clear goals for explainability. We end by discussing concerns raised regarding explainability.

Explainable Machine Learning in Deployment: An Overview

The paper "Explainable Machine Learning in Deployment" by Bhatt et al. provides an in-depth exploration of the practical deployment of explainable ML techniques in various organizational contexts. It aims to bridge the gap between explainability methodologies developed in academic settings and their real-world applications. The research focuses on understanding how explainability is viewed and utilized by organizations, particularly emphasizing the difference between internal and external stakeholders.

Key Findings and Techniques

The paper's primary focus is on local explainability methods and their practical deployment across different industries. It identifies a pronounced gap between the theoretical promise of explainability and its actual use, which caters largely to internal stakeholders, such as ML engineers, rather than to external end users. The study examines four families of methods, each with specific insights:

  1. Feature Importance: This is the most commonly deployed explainability technique, often relying on Shapley values to quantify how much each feature contributes to a prediction. The authors note that while this technique is widely used to sanity-check model outputs, it rarely serves to explain predictions to end users (a minimal sketch follows this list).
  2. Counterfactual Explanations: These explain a model's output by identifying the minimal change to an input that would flip the model's prediction. The paper identifies them as potential tools for providing recourse, although practical deployment faces challenges due to computational cost and the need for plausible, actionable changes (see the second sketch after this list).
  3. Adversarial Training: This method improves model robustness and explainability by focusing on features that are consistent across adversarial examples. The paper remarks on the surprising correlation between robustness and interpretability, offering insights into improving ML model reliability.
  4. Influential Samples: Techniques like influence functions attempt to identify which training data points most affect a given prediction. Despite theoretical interest, practical deployment is limited due to computational and interpretational challenges, particularly in handling outliers.
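
As a concrete illustration of the first item, the sketch below computes Shapley-value feature importances for a tree ensemble and ranks features by mean absolute attribution, mirroring the "sanity-check" use the paper attributes to ML engineers. The `shap` library, the scikit-learn regressor, and the diabetes dataset are illustrative assumptions, not tools specified by the paper.

```python
# Minimal sketch of the most common deployment pattern the paper reports:
# Shapley-value feature importance used by engineers to sanity-check a model.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shapley-value attributions for each test-set prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)   # shape: (n_samples, n_features)

# Global sanity check: rank features by mean absolute attribution. In the
# deployments the paper describes, an engineer inspects this ranking to debug
# the model rather than surfacing it to end users.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name:10s} {score:8.2f}")
```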

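For the second item, here is a deliberately simple counterfactual search that looks for the smallest single-feature shift (measured in standard deviations) that flips a prediction. The dataset, model, and greedy grid search are illustrative assumptions; as the paper notes, real deployments must also contend with computational cost and the plausibility of the suggested changes.

```python
# Toy sketch of a counterfactual explanation: find the smallest single-feature
# change that flips the model's predicted class for a given input.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)).fit(X, y)

def one_feature_counterfactual(x, model, feature_scale, steps=61, max_shift=3.0):
    """Return (shift, feature_index, counterfactual) for the smallest
    single-feature perturbation that changes the predicted class."""
    original = model.predict(x.reshape(1, -1))[0]
    flips = []
    for j in range(x.shape[0]):
        for delta in np.linspace(-max_shift, max_shift, steps):
            x_cf = x.copy()
            x_cf[j] = x[j] + delta * feature_scale[j]
            if model.predict(x_cf.reshape(1, -1))[0] != original:
                flips.append((abs(delta), j, x_cf))
    return min(flips, key=lambda t: t[0], default=None)

result = one_feature_counterfactual(X[0], model, X.std(axis=0))
if result is not None:
    shift, j, x_cf = result
    print(f"Flipping the prediction for sample 0 requires moving "
          f"'{data.feature_names[j]}' by ~{shift:.2f} std devs "
          f"({X[0, j]:.2f} -> {x_cf[j]:.2f}).")
```
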
Methodology

The paper synthesizes insights from approximately fifty interviews with stakeholders at around thirty organizations, both non-profit and for-profit. It groups stakeholders into executives, ML engineers, end users, and other parties to analyze each group's specific needs and how it engages with explainability.

Implications and Recommendations

The paper suggests that many organizations still rely heavily on domain experts to filter and validate explanations, indicating a mismatch between technical capabilities and practical needs. It stresses the importance of setting clear goals for explainability, recommending a structured framework to establish stakeholder-specific desiderata.

Furthermore, the paper raises concerns about explainability, such as privacy issues, the challenge of ensuring causal rather than correlative explanations, and the dual-use nature of improved model understanding, which can also empower malicious applications.

Future Directions

The research highlights several avenues for future work, emphasizing the need for:

  1. Causal Explanations: Developing methods that provide causal insights as opposed to merely correlative ones.
  2. Scalable Solutions: Addressing computational inefficiencies to enable real-time explainability.
  3. Framework Development: Creating frameworks that align explanation techniques with specific organizational goals and contexts.
  4. Regulatory and Ethical Considerations: Navigating the evolving legal landscape regarding explainability mandates.

This paper provides a comprehensive view that not only documents current practices but also stimulates further research and technological advancements in the field of explainable AI.

Authors (10)
  1. Umang Bhatt (42 papers)
  2. Alice Xiang (28 papers)
  3. Shubham Sharma (51 papers)
  4. Adrian Weller (150 papers)
  5. Ankur Taly (22 papers)
  6. Yunhan Jia (5 papers)
  7. Joydeep Ghosh (74 papers)
  8. Ruchir Puri (17 papers)
  9. José M. F. Moura (118 papers)
  10. Peter Eckersley (7 papers)
Citations (536)