
ViT-CX: Causal Explanation of Vision Transformers (2211.03064v3)

Published 6 Nov 2022 in cs.CV and cs.AI

Abstract: Despite the popularity of Vision Transformers (ViTs) and eXplainable AI (XAI), only a few explanation methods have been designed specifically for ViTs thus far. They mostly use attention weights of the [CLS] token on patch embeddings and often produce unsatisfactory saliency maps. This paper proposes a novel method for explaining ViTs called ViT-CX. It is based on patch embeddings, rather than the attention paid to them, and their causal impacts on the model output. Other characteristics of ViTs, such as causal overdetermination, are also considered in the design of ViT-CX. The empirical results show that ViT-CX produces more meaningful saliency maps and does a better job of revealing all important evidence for the predictions than previous methods. The explanations generated by ViT-CX also show significantly better faithfulness to the model. The code and appendix are available at https://github.com/vaynexie/CausalX-ViT.
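To make the core idea concrete, the sketch below illustrates masking-based causal attribution for a ViT classifier: individual patches are masked and the drop in the target-class score is recorded as that patch's importance. This is a hypothetical, simplified illustration only; ViT-CX itself operates on patch embeddings rather than input pixels and additionally accounts for causal overdetermination, so for the authors' actual method see the linked repository. The model name and helper function here are assumptions for the example.

```python
# Hypothetical illustration: patch-occlusion saliency for a ViT classifier.
# This is NOT the ViT-CX method (which masks patch embeddings and handles
# causal overdetermination); see https://github.com/vaynexie/CausalX-ViT.
import torch
import timm  # any pretrained ViT classifier works; timm is assumed available

model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()

def occlusion_saliency(image: torch.Tensor, target: int, patch: int = 16) -> torch.Tensor:
    """image: (1, 3, 224, 224) normalized tensor; returns a (14, 14) saliency map."""
    with torch.no_grad():
        base = model(image).softmax(-1)[0, target]
        grid = image.shape[-1] // patch
        saliency = torch.zeros(grid, grid)
        for i in range(grid):
            for j in range(grid):
                masked = image.clone()
                # Zero out one 16x16 patch in pixel space.
                masked[..., i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
                score = model(masked).softmax(-1)[0, target]
                # A large drop in the target score marks a causally important patch.
                saliency[i, j] = (base - score).clamp(min=0)
    return saliency
```

A single-patch occlusion like this can miss evidence when several patches each suffice for the prediction (causal overdetermination), which is one of the ViT-specific issues the paper's method is designed to address.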

Authors (4)
  1. Weiyan Xie (6 papers)
  2. Xiao-Hui Li (12 papers)
  3. Caleb Chen Cao (13 papers)
  4. Nevin L. Zhang (44 papers)
Citations (13)
GitHub: https://github.com/vaynexie/CausalX-ViT