
Quantifying Model Complexity via Functional Decomposition for Better Post-Hoc Interpretability (1904.03867v2)

Published 8 Apr 2019 in stat.ML and cs.LG

Abstract: Post-hoc model-agnostic interpretation methods such as partial dependence plots can be employed to interpret complex machine learning models. While these interpretation methods can be applied regardless of model complexity, they can produce misleading and verbose results if the model is too complex, especially w.r.t. feature interactions. To quantify the complexity of arbitrary machine learning models, we propose model-agnostic complexity measures based on functional decomposition: number of features used, interaction strength and main effect complexity. We show that post-hoc interpretation of models that minimize the three measures is more reliable and compact. Furthermore, we demonstrate the application of these measures in a multi-objective optimization approach which simultaneously minimizes loss and complexity.
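The interaction strength measure described in the abstract can be illustrated with a small sketch: approximate the model by its intercept plus first-order (main) effects estimated via partial dependence, and report the share of prediction variance the additive approximation fails to explain. The `predict` function below is a hypothetical stand-in for any fitted model, and the grid size and toy data are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model with an explicit interaction term,
# standing in for any fitted ML model's prediction function.
def predict(X):
    return X[:, 0] + 2 * X[:, 1] + X[:, 0] * X[:, 1]

X = rng.uniform(-1, 1, size=(500, 2))
f = predict(X)
f0 = f.mean()  # average prediction (intercept of the functional decomposition)

def partial_dependence(j, X, n_grid=20):
    """Centered first-order (main) effect of feature j via partial dependence."""
    grid = np.linspace(X[:, j].min(), X[:, j].max(), n_grid)
    pd = np.empty(n_grid)
    for i, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, j] = v          # fix feature j, marginalize over the rest
        pd[i] = predict(Xv).mean()
    return grid, pd - pd.mean()

# Additive approximation: intercept + sum of main effects at each data point
f_add = np.full(len(X), f0)
for j in range(X.shape[1]):
    grid, pd = partial_dependence(j, X)
    f_add += np.interp(X[:, j], grid, pd)

# Interaction strength: fraction of prediction variance NOT captured
# by the purely additive (main-effects-only) approximation.
ias = np.sum((f - f_add) ** 2) / np.sum((f - f0) ** 2)
print(round(ias, 3))
```

For this toy model the additive part recovers the `x0 + 2*x1` terms, so the residual variance comes almost entirely from the `x0*x1` interaction, yielding a small but nonzero interaction strength; a model with no interactions would score near zero.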

Authors (3)
  1. Christoph Molnar (11 papers)
  2. Giuseppe Casalicchio (34 papers)
  3. Bernd Bischl (136 papers)
Citations (58)
