
How Does Attention Work in Vision Transformers? A Visual Analytics Attempt (2303.13731v1)

Published 24 Mar 2023 in cs.LG, cs.CV, and cs.HC

Abstract: The vision transformer (ViT) extends the success of transformer models from sequential data to images. The model decomposes an image into many smaller patches and arranges them into a sequence. Multi-head self-attention is then applied to the sequence to learn the attention between patches. Despite many successful interpretations of transformers on sequential data, little effort has been devoted to the interpretation of ViTs, and many questions remain unanswered. For example, among the numerous attention heads, which ones are more important? How strongly do individual patches attend to their spatial neighbors in different heads? What attention patterns have individual heads learned? In this work, we answer these questions through a visual analytics approach. Specifically, we first identify which heads are more important in ViTs by introducing multiple pruning-based metrics. Then, we profile the spatial distribution of attention strengths between patches inside individual heads, as well as the trend of attention strengths across attention layers. Third, using an autoencoder-based learning solution, we summarize all possible attention patterns that individual heads could learn. Examining the attention strengths and patterns of the important heads, we explain why they are important. Through concrete case studies with experienced deep learning experts on multiple ViTs, we validate the effectiveness of our solution, which deepens the understanding of ViTs in terms of head importance, head attention strength, and head attention pattern.
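The pipeline the abstract describes (patchify an image, run multi-head self-attention over the patch sequence, then profile each head's patch-to-patch attention by spatial distance) can be illustrated with a minimal PyTorch sketch. This is not the authors' code: the patch size, embedding width, head count, and the neighbor-distance threshold below are illustrative assumptions, and the class token and positional embeddings of a real ViT are omitted for brevity.

```python
# Minimal sketch (assumed setup, not the paper's implementation): per-head
# patch-to-patch attention in a ViT-style layer, plus a simple profile of
# attention strength to spatial neighbors vs. distant patches.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# --- Patchify one image into a sequence of patch embeddings ---------------
img = torch.randn(1, 3, 224, 224)                 # one RGB image
patch_size, embed_dim, num_heads = 16, 64, 4      # illustrative hyper-parameters
grid = 224 // patch_size                          # 14 x 14 = 196 patches
patches = F.unfold(img, kernel_size=patch_size, stride=patch_size)  # (1, 768, 196)
patches = patches.transpose(1, 2)                 # (1, 196, 768) raw patch pixels
proj = torch.nn.Linear(patches.shape[-1], embed_dim)
tokens = proj(patches)                            # (1, 196, embed_dim)

# --- One multi-head self-attention layer, keeping the attention weights ---
head_dim = embed_dim // num_heads
qkv = torch.nn.Linear(embed_dim, 3 * embed_dim)
q, k, v = qkv(tokens).chunk(3, dim=-1)

def split_heads(x):                               # (1, N, D) -> (1, H, N, D/H)
    return x.view(1, -1, num_heads, head_dim).transpose(1, 2)

q, k = split_heads(q), split_heads(k)
attn = (q @ k.transpose(-2, -1)) / head_dim ** 0.5
attn = attn.softmax(dim=-1)                       # (1, heads, 196, 196) per-head attention

# --- Profile attention strength vs. spatial distance between patches ------
ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid), indexing="ij")
coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (196, 2) grid positions
dist = torch.cdist(coords, coords)                # pairwise patch distances in grid units

for h in range(num_heads):
    a = attn[0, h]
    near = a[dist <= 1.5].mean().item()           # self + 8-neighborhood (threshold is an assumption)
    far = a[dist > 1.5].mean().item()             # distant patches
    print(f"head {h}: mean attention to neighbors {near:.4f}, to distant patches {far:.4f}")
```

With a trained model, the same per-head attention tensors would feed the paper's analyses: comparing near vs. far attention reveals how local each head is, and metrics computed after masking individual heads give a pruning-based notion of head importance.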

Authors (8)
  1. Yiran Li (29 papers)
  2. Junpeng Wang (53 papers)
  3. Xin Dai (27 papers)
  4. Liang Wang (512 papers)
  5. Chin-Chia Michael Yeh (43 papers)
  6. Yan Zheng (102 papers)
  7. Wei Zhang (1489 papers)
  8. Kwan-Liu Ma (80 papers)
Citations (14)