Manifold-based Shapley for SAR Recognization Network Explanation (2401.03128v1)
Abstract: Explainable artificial intelligence (XAI) is of immense significance in enhancing the transparency and credibility of deep neural networks, particularly in risky, high-cost scenarios such as synthetic aperture radar (SAR) target recognition. The Shapley value is a game-theoretic explanation technique with a robust mathematical foundation. However, Shapley assumes that the model's features are independent, which renders Shapley explanations invalid for high-dimensional models. This study introduces a manifold-based Shapley method that projects high-dimensional features into low-dimensional manifold features and subsequently obtains Fusion-Shap, which aims to (1) address the erroneous explanations produced by traditional Shap, and (2) resolve the challenge of interpretability that traditional Shap faces in complex scenarios.
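To make the idea concrete, below is a minimal, hypothetical sketch of the manifold-based pipeline described in the abstract. Everything here is a stand-in, not the paper's actual method: a PCA projection substitutes for the paper's manifold embedding, a logistic-regression toy model substitutes for the SAR recognition network, and a Monte Carlo permutation estimator substitutes for the exact Shapley computation. Names such as `predict_from_latent` and `monte_carlo_shapley` are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins: 64-dimensional "images" and a binary classifier
# in place of a SAR recognition network.
X = rng.normal(size=(500, 64))
y = (X[:, :4].sum(axis=1) > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Step 1: project high-dimensional features onto a low-dimensional
# manifold. PCA is a hypothetical stand-in for the paper's embedding.
k = 8
manifold = PCA(n_components=k).fit(X)

def predict_from_latent(z):
    """Decode latent coordinates back to input space, query the model."""
    x = manifold.inverse_transform(z)
    return model.predict_proba(x)[:, 1]

def monte_carlo_shapley(z, baseline, n_samples=200):
    """Estimate Shapley values of the k latent features by sampling
    random permutations and accumulating marginal contributions."""
    phi = np.zeros(k)
    for _ in range(n_samples):
        order = rng.permutation(k)
        current = baseline.copy()
        prev = predict_from_latent(current[None])[0]
        for j in order:
            current[j] = z[j]            # switch latent feature j on
            cur = predict_from_latent(current[None])[0]
            phi[j] += cur - prev         # marginal contribution of j
            prev = cur
    return phi / n_samples

# Explain one sample relative to a mean-image baseline.
x0 = X[0]
z0 = manifold.transform(x0[None])[0]
baseline = manifold.transform(X.mean(axis=0, keepdims=True))[0]
phi_latent = monte_carlo_shapley(z0, baseline)

# Step 2 (fusion): push latent attributions back to input space through
# the linear decoder, a crude stand-in for the Fusion-Shap step.
phi_input = manifold.components_.T @ phi_latent
print("latent Shapley values:", np.round(phi_latent, 3))
print("top input features:", np.argsort(-np.abs(phi_input))[:5])
```

Because Shapley is computed over only k manifold coordinates rather than all 64 raw features, the perturbed inputs stay near the data manifold and the estimator needs far fewer coalitions, which is the core motivation the abstract states for the method.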