
MFABA: A More Faithful and Accelerated Boundary-based Attribution Method for Deep Neural Networks (2312.13630v1)

Published 21 Dec 2023 in cs.CV and cs.LG

Abstract: To better understand the outputs of deep neural networks (DNNs), attribution-based methods have become an important approach to model interpretability: they assign each input dimension a score indicating its importance to the model's outcome. Notably, attribution methods rely on the axioms of sensitivity and implementation invariance to ensure the validity and reliability of attribution results. Yet existing attribution methods present challenges for effective interpretation and efficient computation. In this work, we introduce MFABA, an axiom-compliant attribution algorithm, as a novel method for interpreting DNNs. We also provide a theoretical proof and in-depth analysis of the MFABA algorithm and conduct large-scale experiments. The results demonstrate its superiority, achieving more than 101.5142 times the speed of state-of-the-art attribution algorithms. The effectiveness of MFABA is thoroughly evaluated through statistical analysis in comparison to other methods, and the full implementation package is open-source at: https://github.com/LMBTough/MFABA
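To illustrate the attribution setting the abstract describes, here is a minimal sketch of assigning each input dimension a score via the input gradient of a toy model. This is not the MFABA algorithm itself; the logistic "model", its weights, and the finite-difference gradient estimate are all illustrative assumptions standing in for a real DNN and autodiff.

```python
import numpy as np

# Hypothetical stand-in for a DNN's scalar class output:
# a logistic model over 4 input features with fixed weights.
W = np.array([2.0, -1.0, 0.5, 0.0])

def model(x):
    return 1.0 / (1.0 + np.exp(-np.dot(W, x)))

def gradient_attribution(x, eps=1e-5):
    """Assign each input dimension a saliency-style importance score:
    the partial derivative of the output with respect to that dimension,
    estimated here with central finite differences."""
    scores = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        d = np.zeros_like(x, dtype=float)
        d[i] = eps
        scores[i] = (model(x + d) - model(x - d)) / (2 * eps)
    return scores

x = np.array([1.0, 1.0, 1.0, 1.0])
attr = gradient_attribution(x)
```

In this toy case the fourth feature has zero weight, so its attribution is zero, loosely illustrating the sensitivity axiom: an input that cannot change the output receives no credit. Path- and boundary-based methods such as the one the paper proposes refine this basic gradient signal.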

Authors (8)
  1. Zhiyu Zhu (40 papers)
  2. Huaming Chen (38 papers)
  3. Jiayu Zhang (29 papers)
  4. Xinyi Wang (152 papers)
  5. Zhibo Jin (14 papers)
  6. Minhui Xue (72 papers)
  7. Dongxiao Zhu (41 papers)
  8. Kim-Kwang Raymond Choo (59 papers)
Citations (5)

