
Robust Attribution Regularization (1905.09957v3)

Published 23 May 2019 in cs.LG, cs.AI, cs.CR, and stat.ML

Abstract: An emerging problem in trustworthy machine learning is to train models that produce robust interpretations for their predictions. We take a step towards solving this problem through the lens of axiomatic attribution of neural networks. Our theory is grounded in Integrated Gradients (IG), a recent method for axiomatically attributing a neural network's output change to its input change. We propose training objectives in classic robust optimization models to achieve robust IG attributions. Our objectives give principled generalizations of previous objectives designed for robust predictions, and they naturally degenerate to classic soft-margin training for one-layer neural networks. We also generalize previous theory and prove that the objectives for different robust optimization models are closely related. Experiments demonstrate the effectiveness of our method, and point to intriguing problems which hint at the need for better optimization techniques or better neural network architectures for robust attribution training.
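
The abstract's core recipe, computing IG attributions along a path between an input and a perturbed input and penalizing how large they can be, can be sketched in a few lines. The PyTorch sketch below is an assumption-laden illustration rather than the paper's exact objective: the step count, the lam weight, the 1-norm size function, and the use of a fixed perturbed point x_adv (where the paper's objectives take a worst-case maximum over a perturbation ball, applied to the loss rather than a logit) are all placeholders.

import torch
import torch.nn as nn

def integrated_gradients(model, x, baseline, target, n_steps=32):
    # Straight-line path from the baseline x' to the input x, sampled at
    # n_steps points for a Riemann-sum approximation of the path integral.
    alphas = torch.linspace(0.0, 1.0, n_steps).unsqueeze(1)   # (n_steps, 1)
    path = baseline + alphas * (x - baseline)                 # (n_steps, d)
    path.requires_grad_(True)
    # Gradient of the target-class output at every point on the path;
    # create_graph=True keeps the regularizer differentiable for training.
    out = model(path)[:, target].sum()
    grads = torch.autograd.grad(out, path, create_graph=True)[0]
    # IG_i = (x_i - x'_i) * mean of dF/dx_i along the path.
    return (x - baseline) * grads.mean(dim=0)

def robust_attribution_loss(model, x, x_adv, y, lam=1.0):
    # Illustrative objective in the spirit of the paper's IG-based
    # regularizers: prediction loss at the perturbed point plus a penalty
    # on the size of the IG attribution computed between x and x_adv.
    # lam and the 1-norm size function are stand-ins, not the paper's
    # exact choices.
    ce = nn.functional.cross_entropy(model(x_adv.unsqueeze(0)), y.unsqueeze(0))
    ig = integrated_gradients(model, x_adv, x, y.item())
    return ce + lam * ig.abs().sum()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
    x = torch.randn(4)
    x_adv = x + 0.1 * torch.randn(4)   # stand-in for an adversarially found point
    y = torch.tensor(1)
    print(robust_attribution_loss(model, x, x_adv, y).item())

In actual training, x_adv would be produced by an inner maximization (e.g. projected gradient ascent over an epsilon-ball around x), which is the robust optimization model the abstract refers to.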

Authors (5)
  1. Jiefeng Chen (26 papers)
  2. Xi Wu (100 papers)
  3. Vaibhav Rastogi (11 papers)
  4. Yingyu Liang (107 papers)
  5. Somesh Jha (112 papers)
Citations (78)
