
Explaining a black-box using Deep Variational Information Bottleneck Approach (1902.06918v2)

Published 19 Feb 2019 in cs.LG and stat.ML

Abstract: Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information-theoretic principle, the information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed about an input (briefness) and informative about the decision made by a black-box system on that input (comprehensiveness). We evaluate VIBI on three datasets and compare it with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity, evaluated by human and quantitative metrics.
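The information bottleneck criterion referenced in the abstract can be stated in its standard form; the notation below follows the classical information bottleneck formulation and is not necessarily the paper's own. Given an input $x$, the black-box output $y$, and a compressed explanation $t$, one seeks a stochastic encoder $p(t \mid x)$ that solves

```latex
\max_{p(t \mid x)} \; I(t; y) \;-\; \beta \, I(t; x)
```

where $I(\cdot\,;\cdot)$ denotes mutual information and $\beta > 0$ trades off comprehensiveness (keeping $t$ informative about the black-box decision $y$) against briefness (compressing $t$ with respect to the input $x$).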

Authors (5)
  1. Seojin Bang (7 papers)
  2. Pengtao Xie (86 papers)
  3. Heewook Lee (2 papers)
  4. Wei Wu (482 papers)
  5. Eric Xing (127 papers)
Citations (67)
