Explainable Deep Classification Models for Domain Generalization (2003.06498v1)

Published 13 Mar 2020 in cs.CV

Abstract: Conventionally, AI models are thought to trade off explainability for lower accuracy. We develop a training strategy that not only leads to a more explainable AI system for object classification, but also suffers no perceptible accuracy degradation. Explanations are defined as regions of visual evidence upon which a deep classification network makes a decision. This is represented in the form of a saliency map conveying how much each pixel contributed to the network's decision. Our training strategy enforces periodic saliency-based feedback to encourage the model to focus on the image regions that directly correspond to the ground-truth object. We quantify explainability using both an automated metric and human judgement. We propose explainability as a means of bridging the visual-semantic gap between different domains, where model explanations are used to disentangle domain-specific information from otherwise relevant features. We demonstrate that this leads to improved generalization to new domains without hindering performance on the original domain.
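
The abstract describes a periodic saliency-based feedback signal that steers the classifier toward evidence inside the ground-truth object region. The sketch below illustrates one way such a training step could look in PyTorch; the gradient-based saliency choice, the `object_masks` tensor, the `saliency_weight` hyperparameter, and all function names are assumptions for illustration, not the authors' actual implementation.

```python
# A minimal sketch of saliency-guided training, assuming gradient-based
# saliency and a binary object mask per image (1 inside the object).
# This is illustrative only; the paper's exact saliency method and loss differ.
import torch
import torch.nn.functional as F

def saliency_feedback_step(model, images, labels, object_masks,
                           optimizer, saliency_weight=0.1):
    """One training step that penalizes saliency falling outside the
    ground-truth object region in addition to the classification loss."""
    images = images.clone().requires_grad_(True)
    logits = model(images)
    cls_loss = F.cross_entropy(logits, labels)

    # Gradient-based saliency: how much each pixel contributed to the
    # score of the ground-truth class (one common choice).
    class_scores = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(class_scores, images, create_graph=True)
    saliency = grads.abs().sum(dim=1)                              # (B, H, W)
    saliency = saliency / (saliency.amax(dim=(1, 2), keepdim=True) + 1e-8)

    # Feedback term: discourage visual evidence outside the object mask.
    outside = saliency * (1.0 - object_masks)
    feedback_loss = outside.mean()

    loss = cls_loss + saliency_weight * feedback_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the feedback term could be applied only every few epochs to match the "periodic" feedback described in the abstract, e.g. by calling the plain classification step otherwise.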

Authors (8)
  1. Andrea Zunino (17 papers)
  2. Sarah Adel Bargal (29 papers)
  3. Riccardo Volpi (30 papers)
  4. Mehrnoosh Sameki (6 papers)
  5. Jianming Zhang (85 papers)
  6. Stan Sclaroff (56 papers)
  7. Vittorio Murino (66 papers)
  8. Kate Saenko (178 papers)
Citations (37)