Eigen Analysis of Self-Attention and its Reconstruction from Partial Computation (2106.08823v1)

Published 16 Jun 2021 in cs.LG

Abstract: State-of-the-art transformer models use pairwise dot-product based self-attention, which comes at a computational cost quadratic in the input sequence length. In this paper, we investigate the global structure of attention scores computed using this dot product mechanism on a typical distribution of inputs, and study the principal components of their variation. Through eigen analysis of full attention score matrices, as well as of their individual rows, we find that most of the variation among attention scores lies in a low-dimensional eigenspace. Moreover, we find significant overlap between these eigenspaces for different layers and even different transformer models. Based on this, we propose to compute scores only for a partial subset of token pairs, and use them to estimate scores for the remaining pairs. Beyond investigating the accuracy of reconstructing attention scores themselves, we investigate training transformer models that employ these approximations, and analyze the effect on overall accuracy. Our analysis and the proposed method provide insights into how to balance the benefits of exact pairwise attention and its significant computational expense.
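The NumPy sketch below illustrates the general idea described in the abstract under simplifying assumptions: it builds an eigenbasis by PCA over attention-score rows from sample inputs, computes scores only against a random subset of keys, and estimates the remaining entries by least-squares projection onto that basis. The variable names, the random Gaussian inputs, the rank k, the subset choice, and the reconstruction step are illustrative choices, not the paper's exact procedure.

```python
# Minimal sketch: low-rank reconstruction of attention scores from partial computation.
# Assumptions (not from the paper): random Gaussian Q/K, PCA-based eigenbasis,
# uniformly random key subset, least-squares fit of per-row coefficients.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 128, 64, 8          # sequence length, head dim, eigenspace rank (illustrative)

def attention_scores(Q, K):
    """Full pre-softmax dot-product attention scores (n x n)."""
    return Q @ K.T / np.sqrt(K.shape[-1])

# --- 1. Eigen analysis: PCA of attention-score rows over sample inputs ---
rows = []
for _ in range(32):                       # sample inputs standing in for a "typical" distribution
    Q, K = rng.standard_normal((n, d)), rng.standard_normal((n, d))
    rows.append(attention_scores(Q, K))
rows = np.concatenate(rows, axis=0)       # each row is one token's score vector of length n
mean = rows.mean(axis=0)
_, _, Vt = np.linalg.svd(rows - mean, full_matrices=False)
basis = Vt[:k].T                          # n x k: top-k principal directions of score rows

# --- 2. Reconstruction from partial computation on a new input ---
Q, K = rng.standard_normal((n, d)), rng.standard_normal((n, d))
S_true = attention_scores(Q, K)

m = 32                                    # compute scores against only m of the n keys
cols = rng.choice(n, size=m, replace=False)
S_partial = Q @ K[cols].T / np.sqrt(d)    # n x m: the only dot products actually computed

# Fit each row's eigenspace coefficients from its observed entries (least squares),
# then fill in the unobserved entries from the low-rank model.
coef, *_ = np.linalg.lstsq(basis[cols], (S_partial - mean[cols]).T, rcond=None)
S_est = mean + (basis @ coef).T

err = np.linalg.norm(S_est - S_true) / np.linalg.norm(S_true)
print(f"relative reconstruction error: {err:.3f}")
```

On trained transformers, where the paper reports that score rows concentrate in a shared low-dimensional eigenspace, this kind of low-rank completion is what lets scores for unobserved token pairs be estimated; on the random inputs above the snippet only demonstrates the mechanics.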

Authors (6)
  1. Srinadh Bhojanapalli (44 papers)
  2. Ayan Chakrabarti (42 papers)
  3. Himanshu Jain (19 papers)
  4. Sanjiv Kumar (123 papers)
  5. Michal Lukasik (23 papers)
  6. Andreas Veit (29 papers)
Citations (8)
