Insights on Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
The paper "Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs" presents a comprehensive empirical paper focused on the practical aspects of model interpretability in industry, particularly emphasizing the human factors involved. As the prevalence of ML models in various sectors increases, understanding model interpretability becomes critical, especially when these models are integrated into data-driven decision-making processes. The paper offers insights from 22 semi-structured interviews with industry professionals who design, develop, and implement ML models.
Interpretability Roles, Stages, and Goals
The authors categorize the stakeholders involved in interpretability work into three main roles: Model Builders, Model Breakers, and Model Consumers. Model Builders, typically data scientists and engineers, are responsible for creating and validating models. Model Breakers, including domain experts and product managers, offer crucial feedback to ensure models meet real-world needs. Model Consumers are the end-users who rely on the model outputs for decision-making.
The paper delineates interpretability stages spanning Ideation and Conceptualization, Building and Validation, and Deployment and Maintenance. Initially, interpretability concerns guide feature engineering and model conceptualization. During model building, the focus shifts to validating models and understanding feature and instance behavior, often using interpretability tools such as LIME and SHAP. In later stages, the emphasis moves to monitoring models and explaining their behavior to end-users.
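To make the building-and-validation stage concrete, the following is a minimal sketch, assuming Python with scikit-learn and shap installed, of the kind of instance-level feature attribution practitioners described using during validation; the dataset, model, and variable names are illustrative and not drawn from the paper.

```python
# Minimal sketch of instance-level feature attribution with SHAP during model validation.
# Assumes scikit-learn and shap are installed; dataset and model are illustrative stand-ins.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a stand-in model on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to individual features,
# letting a model builder check whether the model relies on sensible signals.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # explanations for five instances
print(shap_values)
```

In the paper's terms, such per-instance attributions are one way builders compare a model's behavior against their own expectations before handing it off to breakers and consumers.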
Themes in Interpretability
Several themes emerge from the paper, offering a nuanced understanding of interpretability as practiced in industry:
- Interpretability as Cooperative: The process involves collaboration across roles to align models with organizational and stakeholder needs, emphasizing the importance of communication and trust-building.
- Interpretability as Process: Rather than a static property, interpretability is perceived as an ongoing concern that evolves throughout a model's lifecycle, necessitating continuous evaluation and adaptation.
- Interpretability as Mental Model Comparison: The paper highlights the importance of comparing human mental models with the behavior of ML models, surfacing and resolving discrepancies so that model outputs are meaningful and actionable.
- Interpretability as Context-Dependent: The practical needs and constraints of interpretability vary based on the specific domain and use-case, necessitating tailored approaches to facilitate understanding and trust in model outputs.
Implications for Design and Future Research
The paper underscores several design opportunities to address existing interpretability challenges. Tools that better elicit and integrate human expectations, and that improve how models are communicated and summarized, could significantly enhance interpretability. Furthermore, tools that scale and integrate into existing workflows, particularly for model comparison and post-deployment monitoring, are critical for practical application.
The paper prompts further exploration into the cooperative and contextual nature of interpretability work. It highlights the necessity for research that bridges the gap between academic insights and the real-world challenges faced by practitioners, focusing on the socio-technical aspects of model interpretability.
Overall, this paper provides valuable perspectives on the intricacies of model interpretability in industry settings, advocating for a shift in how interpretability is understood and enacted. By grounding interpretability in human factors and collaborative processes, the paper points to promising directions for improving the design and implementation of interpretable ML models.