Global Saliency: Aggregating Saliency Maps to Assess Dataset Artefact Bias (1910.07604v2)

Published 16 Oct 2019 in cs.CV and cs.LG

Abstract: In high-stakes applications of machine learning models, interpretability methods provide guarantees that models are right for the right reasons. In medical imaging, saliency maps have become the standard tool for determining whether a neural model has learned relevant robust features, rather than artefactual noise. However, saliency maps are limited to local model explanation because they interpret predictions on an image-by-image basis. We propose aggregating saliency globally, using semantic segmentation masks, to provide quantitative measures of model bias across a dataset. To evaluate global saliency methods, we propose two metrics for quantifying the validity of saliency explanations. We apply the global saliency method to skin lesion diagnosis to determine the effect of artefacts, such as ink, on model bias.

Authors (4)
  1. Jacob Pfau (10 papers)
  2. Albert T. Young (2 papers)
  3. Maria L. Wei (3 papers)
  4. Michael J. Keiser (7 papers)
Citations (13)