
Noise Modulation: Let Your Model Interpret Itself (2103.10603v1)

Published 19 Mar 2021 in cs.LG and cs.CV

Abstract: Given the great success of Deep Neural Networks (DNNs) and their black-box nature, the interpretability of these models has become an important issue. Most previous research works on post-hoc interpretation of a trained model. Recently, however, adversarial training has shown that it is possible for a model to acquire interpretable input-gradients through training; unfortunately, adversarial training is too inefficient to be used for interpretability alone. To resolve this problem, we construct an approximation of the adversarial perturbations and discover a connection between adversarial training and amplitude modulation. Based on this digital-signal analogy, we propose noise modulation as an efficient, model-agnostic alternative for training a model that interprets itself through its input-gradients. Experimental results show that noise modulation effectively increases the interpretability of input-gradients in a model-agnostic way.
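The abstract does not spell out the exact form of the modulation, only the amplitude-modulation analogy. As a minimal sketch of that idea, one might multiply each training input element-wise by random noise so the model is trained on modulated signals; the function names `noise_modulate` and `train_step`, the multiplicative `1 + noise_std * ε` form, and the `noise_std` parameter below are illustrative assumptions, not the paper's actual formulation.

```python
import torch

def noise_modulate(x, noise_std=1.0):
    """Hypothetical noise-modulation augmentation: element-wise
    multiplication of the input by a random carrier-like factor,
    mirroring the amplitude-modulation analogy in the abstract."""
    noise = 1.0 + noise_std * torch.randn_like(x)
    return x * noise

def train_step(model, optimizer, criterion, x, y):
    """One training step in which the model only sees modulated
    inputs, which is the assumed mechanism for encouraging
    interpretable input-gradients."""
    optimizer.zero_grad()
    logits = model(noise_modulate(x))
    loss = criterion(logits, y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the augmentation touches only the input pipeline and not the architecture or loss, a scheme of this shape would be model-agnostic in the sense the abstract claims, and far cheaper than computing adversarial perturbations at every step.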
