
Scientific Inference With Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena (2206.05487v3)

Published 11 Jun 2022 in stat.ML and cs.LG

Abstract: To learn about real-world phenomena, scientists have traditionally used models with clearly interpretable elements. However, modern ML models, while powerful predictors, lack this direct elementwise interpretability (e.g., neural network weights). Interpretable machine learning (IML) offers a solution by analyzing models holistically to derive interpretations. Yet, current IML research is focused on auditing ML models rather than leveraging them for scientific inference. Our work bridges this gap, presenting a framework for designing IML methods, termed 'property descriptors', that illuminate not just the model, but also the phenomenon it represents. We demonstrate that property descriptors, grounded in statistical learning theory, can effectively reveal relevant properties of the joint probability distribution of the observational data. We identify existing IML methods suited for scientific inference and provide a guide for developing new descriptors with quantified epistemic uncertainty. Our framework empowers scientists to harness ML models for inference, and provides directions for future IML research to support scientific understanding.
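The abstract notes that some existing IML methods can already act as property descriptors with quantified epistemic uncertainty. As a rough illustration only (not the paper's implementation), the sketch below treats a partial dependence curve as a descriptor of a distributional property and attaches a naive refit-bootstrap uncertainty band; the synthetic data, function name, and bootstrap scheme are assumptions made for this example.

```python
# Minimal sketch: a partial dependence curve used as a "property descriptor",
# with a naive bootstrap (resample data, refit model) for epistemic uncertainty.
# Dataset, function name, and bootstrap scheme are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "observational data": Y depends nonlinearly on X0 and linearly on X1.
n = 500
X = rng.normal(size=(n, 2))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)

def partial_dependence_descriptor(X, y, feature, grid, n_boot=30):
    """Estimate a partial dependence curve for `feature` with bootstrap bands."""
    curves = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))        # resample the data
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X[idx], y[idx])                          # refit on the resample
        curve = []
        for v in grid:
            X_mod = X[idx].copy()
            X_mod[:, feature] = v                          # set the feature to v
            curve.append(model.predict(X_mod).mean())      # average prediction
        curves.append(curve)
    curves = np.asarray(curves)
    return curves.mean(axis=0), np.percentile(curves, [2.5, 97.5], axis=0)

grid = np.linspace(-2, 2, 21)
mean_pd, (lo, hi) = partial_dependence_descriptor(X, y, feature=0, grid=grid)
for v, m, a, b in zip(grid, mean_pd, lo, hi):
    print(f"x0={v:+.2f}  PD={m:+.3f}  95% band [{a:+.3f}, {b:+.3f}]")
```

In this toy setting the recovered curve should roughly track sin(2*x0); the bootstrap band is only a crude stand-in for the quantified epistemic uncertainty the framework calls for.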

Authors (4)
  1. Timo Freiesleben (11 papers)
  2. Gunnar König (14 papers)
  3. Christoph Molnar (11 papers)
  4. Alvaro Tejero-Cantero (29 papers)
Citations (18)