Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs (2004.11440v2)

Published 23 Apr 2020 in cs.HC, cs.CY, and cs.LG

Abstract: As the use of ML models in product development and data-driven decision-making processes has become pervasive in many domains, people's focus on building a well-performing model has increasingly shifted to understanding how their model works. While scholarly interest in model interpretability has grown rapidly in research communities such as HCI and ML, little is known about how practitioners perceive and aim to provide interpretability in the context of their existing workflows. This lack of understanding of interpretability as practiced may prevent interpretability research from addressing important needs, or lead to unrealistic solutions. To bridge this gap, we conducted 22 semi-structured interviews with industry practitioners to understand how they conceive of and design for interpretability while they plan, build, and use their models. Based on a qualitative analysis of our results, we differentiate interpretability roles, processes, goals, and strategies as they exist within organizations making heavy use of ML models. The characterization of interpretability work that emerges from our analysis suggests that model interpretability frequently involves cooperation and mental model comparison between people in different roles, often aimed at building trust not only between people and models but also between people within the organization. We present implications for design that discuss gaps between the interpretability challenges that practitioners face in their practice and approaches proposed in the literature, highlighting possible research directions that can better address real-world needs.

Insights on Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs

The paper "Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs" presents a comprehensive empirical paper focused on the practical aspects of model interpretability in industry, particularly emphasizing the human factors involved. As the prevalence of ML models in various sectors increases, understanding model interpretability becomes critical, especially when these models are integrated into data-driven decision-making processes. The paper offers insights from 22 semi-structured interviews with industry professionals who design, develop, and implement ML models.

Interpretability Roles, Stages, and Goals

The authors categorize the stakeholders involved in interpretability work into three main roles: Model Builders, Model Breakers, and Model Consumers. Model Builders, typically data scientists and engineers, are responsible for creating and validating models. Model Breakers, including domain experts and product managers, offer crucial feedback to ensure models meet real-world needs. Model Consumers are the end-users who rely on the model outputs for decision-making.

The paper delineates interpretability stages spanning Ideation and Conceptualization, Building and Validation, and Deployment and Maintenance. Initially, interpretability concerns guide feature engineering and model conceptualization. During model building, the focus shifts to validating models and understanding feature and instance behavior, often using interpretability tools such as LIME and SHAP. In later stages, the emphasis turns to monitoring models and explaining their behavior to end-users.
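
To make the validation-stage workflow concrete, below is a minimal sketch of the kind of feature-attribution check a model builder might run with SHAP. The synthetic dataset, the choice of a random-forest regressor, and the mean-absolute-value aggregation are illustrative assumptions, not details from the paper.

```python
# Minimal SHAP sketch: attribute a model's predictions to its input
# features, supporting the "understanding feature and instance behavior"
# step of the Building and Validation stage. Data and model are synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                              # 500 instances, 4 features
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)   # target driven by features 0 and 1

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-instance, per-feature attributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Global view: mean absolute attribution per feature.
importance = np.abs(shap_values).mean(axis=0)
print({f"feature_{i}": round(float(v), 3) for i, v in enumerate(importance)})
```

LIME follows a similar pattern but works locally: it fits a simple surrogate model around one prediction at a time rather than attributing over the whole dataset.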

Themes in Interpretability

Several themes emerge from the paper, offering a nuanced understanding of interpretability as practiced in industry:

  • Interpretability as Cooperative: The process involves collaboration across roles to align models with organizational and stakeholder needs, emphasizing the importance of communication and trust-building.
  • Interpretability as Process: Rather than a static property, interpretability is perceived as an ongoing concern that evolves throughout a model's lifecycle, necessitating continuous evaluation and adaptation.
  • Interpretability as Mental Model Comparison: The paper highlights the importance of comparing practitioners' mental models with a model's learned behavior, aligning the two and resolving discrepancies so that model outputs are meaningful and actionable (see the sketch after this list).
  • Interpretability as Context-Dependent: The practical needs and constraints of interpretability vary based on the specific domain and use-case, necessitating tailored approaches to facilitate understanding and trust in model outputs.
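
To illustrate the mental-model-comparison theme, the following hypothetical sketch (not from the paper) encodes a domain expert's rule of thumb as code and surfaces the instances where it disagrees with a trained model; in practice, such disagreements become discussion points between builders, breakers, and consumers. The heuristic, data, and model are invented for illustration.

```python
# Hypothetical "mental model comparison": contrast an expert heuristic
# with a trained classifier and list the instances where they disagree.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y)

def expert_heuristic(x):
    # Stand-in for a domain expert's mental model,
    # e.g. "flag whenever the first signal is positive".
    return int(x[0] > 0)

model_preds = model.predict(X)
expert_preds = np.array([expert_heuristic(x) for x in X])

# Disagreements are the cases worth reviewing across roles.
disagreements = np.flatnonzero(model_preds != expert_preds)
print(f"{len(disagreements)} of {len(X)} instances disagree: {disagreements[:10]}")
```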

Implications for Design and Future Research

The paper underscores several design opportunities for addressing existing interpretability challenges. Tools that better elicit and incorporate human expectations, and that improve how models are communicated and summarized, could significantly enhance interpretability. Furthermore, tools that scale and integrate into existing workflows, particularly for model comparison and post-deployment monitoring, are critical for practical adoption.
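
As a rough illustration of what post-deployment monitoring might look like (the paper proposes no specific mechanism), this sketch flags per-feature distribution shift between training data and live traffic using a two-sample Kolmogorov-Smirnov test. The data, the significance threshold, and the print-based "alert" are placeholders.

```python
# Placeholder post-deployment monitor: compare each feature's live
# distribution against the training distribution and flag drift.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
X_train = rng.normal(loc=0.0, size=(1000, 3))  # data the model was built on
X_live = rng.normal(loc=0.3, size=(1000, 3))   # incoming production data (drifted here)

ALERT_P_VALUE = 0.01  # illustrative significance threshold

for j in range(X_train.shape[1]):
    stat, p = ks_2samp(X_train[:, j], X_live[:, j])
    if p < ALERT_P_VALUE:
        # In a real system this would notify model builders and consumers.
        print(f"feature {j}: distribution shift suspected (KS={stat:.3f}, p={p:.4f})")
```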

The paper prompts further exploration into the cooperative and contextual nature of interpretability work. It highlights the necessity for research that bridges the gap between academic insights and the real-world challenges faced by practitioners, focusing on the socio-technical aspects of model interpretability.

Overall, this paper provides valuable perspectives on the intricacies of model interpretability in industry settings, advocating a shift in how interpretability is understood and enacted. By framing interpretability in terms of human factors and collaborative processes, it points to promising directions for designing and implementing interpretable ML systems.

Authors (3)
  1. Sungsoo Ray Hong
  2. Jessica Hullman
  3. Enrico Bertini
Citations (180)