
Generalization on the Enhancement of Layerwise Relevance Interpretability of Deep Neural Network

Published 5 Sep 2020 in cs.CV and cs.AI | arXiv:2009.02516v2

Abstract: The practical application of deep neural networks is still limited by their lack of transparency. One effort to provide explanations for decisions made by AI is the use of saliency or heat maps highlighting relevant regions that contribute significantly to the prediction. A layer-wise amplitude filtering method was previously introduced to improve the quality of heatmaps, performing error corrections by noise-spike suppression. In this study, we generalize the layerwise error correction by considering any identifiable error and assuming that ground-truth interpretable information exists. We study the forms of errors propagated through layerwise relevance methods and propose a filtering technique for interpretability signal rectification tailored to the trend of signal amplitude of the particular neural network used. Finally, we put forth arguments for the use of ground-truth interpretable information.
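
To make the idea of layer-wise amplitude filtering concrete, here is a minimal sketch of noise-spike suppression applied to per-layer relevance maps. The abstract does not specify the paper's actual filter, which is tailored to the amplitude trend of the particular network; the function name `suppress_relevance_spikes`, the percentile threshold, and the sign-preserving clipping below are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def suppress_relevance_spikes(relevance_maps, percentile=99.0):
    """Clip high-amplitude outliers ("noise spikes") in each layer's
    relevance map to a per-layer percentile threshold.

    relevance_maps: list of np.ndarray, one relevance map per layer.
    percentile: amplitude percentile above which values are treated
        as spikes (hypothetical default, not taken from the paper).
    """
    filtered = []
    for rel in relevance_maps:
        # Per-layer amplitude threshold from the distribution of |R|,
        # so the filter adapts to each layer's signal-amplitude scale.
        threshold = np.percentile(np.abs(rel), percentile)
        # Clip magnitudes while preserving sign, keeping the heatmap's
        # positive/negative relevance structure intact.
        filtered.append(np.sign(rel) * np.minimum(np.abs(rel), threshold))
    return filtered

# Toy usage: three synthetic relevance maps with one injected spike.
rng = np.random.default_rng(0)
maps = [rng.normal(size=(8, 8)) for _ in range(3)]
maps[1][0, 0] = 50.0  # an artificial noise spike
clean = suppress_relevance_spikes(maps)
print(clean[1][0, 0])  # spike reduced to that layer's 99th-percentile amplitude
```

A percentile-based threshold is just one plausible choice here; the paper's generalization instead frames the filter around any identifiable error relative to assumed ground-truth interpretable information.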

