Concept Tree: High-Level Representation of Variables for More Interpretable Surrogate Decision Trees (1906.01297v1)

Published 4 Jun 2019 in stat.ML and cs.LG

Abstract: Interpretable surrogates of black-box predictors trained on high-dimensional tabular datasets can struggle to generate comprehensible explanations in the presence of correlated variables. We propose a model-agnostic interpretable surrogate that provides global and local explanations of black-box classifiers to address this issue. We introduce the idea of concepts as intuitive groupings of variables that are either defined by a domain expert or automatically discovered using correlation coefficients. Concepts are embedded in a surrogate decision tree to enhance its comprehensibility. First experiments on FRED-MD, a macroeconomic database with 134 variables, show improvement in human-interpretability while accuracy and fidelity of the surrogate model are preserved.
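The pipeline the abstract describes (group correlated variables into concepts, then fit a surrogate decision tree on the concepts to mimic a black-box classifier) could be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the synthetic dataset, the clustering threshold, and aggregating each concept as the mean of its member variables are all assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-in for a high-dimensional tabular dataset.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

# Black-box predictor to be explained.
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Automatic concept discovery: hierarchical clustering of variables
# using 1 - |correlation| as the distance between variables.
corr = np.corrcoef(X, rowvar=False)
dist = 1.0 - np.abs(corr)
condensed = dist[np.triu_indices_from(dist, k=1)]  # condensed distance form
labels = fcluster(linkage(condensed, method="average"),
                  t=0.8, criterion="distance")

# Each concept's value = mean of its member variables
# (one simple aggregation choice; the paper may use another).
concepts = np.column_stack(
    [X[:, labels == c].mean(axis=1) for c in np.unique(labels)]
)

# Surrogate decision tree trained to mimic the black box on concept features.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(concepts, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(concepts) == black_box.predict(X)).mean()
```

Because the tree splits on a handful of concepts instead of 134 raw correlated variables, its paths read as higher-level rules, which is the interpretability gain the abstract reports.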

Authors (4)
  1. Xavier Renard (14 papers)
  2. Nicolas Woloszko (1 paper)
  3. Jonathan Aigrain (3 papers)
  4. Marcin Detyniecki (41 papers)
Citations (10)
