
How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks (2006.09000v1)

Published 16 Jun 2020 in cs.LG, cs.AI, cs.CV, and stat.ML

Abstract: Explainable AI (XAI) aims to provide interpretations for the predictions made by learning machines such as deep neural networks, in order to make them more transparent to the user and, furthermore, trustworthy for applications in, e.g., safety-critical areas. So far, however, no methods for quantifying the uncertainty of explanations have been conceived, which is problematic in domains where high confidence in explanations is a prerequisite. We therefore contribute a new framework that converts any explanation method for neural networks into an explanation method for Bayesian neural networks, with built-in modeling of uncertainties. Within the Bayesian framework, a network's weights follow a distribution, which extends standard single explanation scores and heatmaps to distributions thereof, thereby translating the intrinsic uncertainties of the network model into a quantification of explanation uncertainties. This allows us, for the first time, to carve out the uncertainties associated with a model explanation and subsequently gauge the appropriate level of explanation confidence for a user (using percentiles). We demonstrate the effectiveness and usefulness of our approach extensively in various experiments, both qualitatively and quantitatively.
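The sketch below illustrates the general idea described in the abstract, not the authors' reference implementation: an approximate Bayesian neural network (here, MC dropout, one common choice) yields many weight samples; an arbitrary explanation method (here, plain input-gradient saliency) is run once per sample; the resulting distribution of heatmaps is summarized by its mean and percentiles. All names (`SmallNet`, `explain_with_uncertainty`, `n_samples`) and the specific attribution method are illustrative assumptions.

```python
# Hedged sketch: MC dropout as an approximate Bayesian NN + input-gradient
# saliency as the wrapped explanation method. Not the paper's official code.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy classifier with dropout so each forward pass acts as one weight sample."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def saliency(model, x, target_class):
    """Plain input-gradient explanation; any other attribution method could be plugged in."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.detach().squeeze(0)

def explain_with_uncertainty(model, x, target_class, n_samples=50):
    """Keep dropout stochastic at test time, collect one explanation per weight
    sample, and summarize the resulting distribution of heatmaps."""
    model.train()  # leaves dropout active, i.e. one approximate posterior sample per pass
    maps = torch.stack([saliency(model, x, target_class) for _ in range(n_samples)])
    return {
        "mean": maps.mean(dim=0),
        "std": maps.std(dim=0),
        "p05": maps.quantile(0.05, dim=0),  # percentile bands gauge explanation confidence
        "p95": maps.quantile(0.95, dim=0),
    }

if __name__ == "__main__":
    model = SmallNet()
    x = torch.randn(1, 784)  # one dummy input
    stats = explain_with_uncertainty(model, x, target_class=3)
    print(stats["mean"].shape, float(stats["std"].mean()))
```

Pixels with wide percentile bands relative to their mean attribution would, under this reading of the framework, be treated as less trustworthy parts of the explanation.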

Authors (5)
  1. Kirill Bykov (11 papers)
  2. Marina M. -C. Höhne (22 papers)
  3. Klaus-Robert Müller (167 papers)
  4. Shinichi Nakajima (44 papers)
  5. Marius Kloft (65 papers)
Citations (29)