FuseMax: Leveraging Extended Einsums to Optimize Attention Accelerator Design (2406.10491v3)

Published 15 Jun 2024 in cs.AR

Abstract: Attention for transformers is a critical workload that has recently received significant "attention" as a target for custom acceleration. Yet, while prior work succeeds in reducing attention's memory-bandwidth requirements, it creates load imbalance between operators that comprise the attention computation (resulting in severe compute under-utilization) and requires on-chip memory that scales with sequence length (which is expected to grow over time). This paper ameliorates these issues, enabling attention with nearly 100% compute utilization, no off-chip memory traffic bottlenecks, and on-chip buffer size requirements that are independent of sequence length. The main conceptual contribution is to use a recently proposed abstraction -- the cascade of Einsums -- to describe, formalize, and taxonomize the space of attention algorithms that appear in the literature. In particular, we show how Einsum cascades can be used to infer non-trivial lower bounds on the number of passes a kernel must take through its input data, which has implications for either required on-chip buffer capacity or memory traffic. We show how this notion can be used to meaningfully divide the space of attention algorithms into several categories and use these categories to inform our design process. Based on the above characterization, we propose FuseMax -- a novel mapping and binding of attention onto a spatial array-style architecture. On attention, in an iso-area comparison, FuseMax achieves an average 6.7x speedup over the prior state-of-the-art, FLAT, while using 79% of the energy. Similarly, on full end-to-end transformer inference, FuseMax achieves an average 5.3x speedup over FLAT using 83% of the energy.
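To make the "cascade of Einsums" framing concrete, the sketch below writes single-head attention as a sequence of Einsums and element-wise maps, where each stage consumes the previous stage's output. This is an illustrative assumption using plain numpy.einsum, not the paper's extended-Einsum notation or the FuseMax mapping itself; the tensor names (Q, K, V) and the score/softmax/weighted-sum staging follow the standard attention formulation the abstract refers to.

```python
# Minimal sketch: attention as a cascade of Einsums (illustrative, not the
# paper's formalism). Q: (M, E) queries, K: (N, E) keys, V: (N, F) values.
import numpy as np

def attention_cascade(Q, K, V):
    E = Q.shape[-1]
    # Stage 1: score Einsum  QK[m, n] = sum_e Q[m, e] * K[n, e] / sqrt(E)
    QK = np.einsum('me,ne->mn', Q, K) / np.sqrt(E)
    # Stage 2: softmax, itself a small cascade of reductions and maps:
    #   RM[m]    = max_n QK[m, n]          (row max, for numerical stability)
    #   SN[m, n] = exp(QK[m, n] - RM[m])   (element-wise map)
    #   SD[m]    = sum_n SN[m, n]          (row-sum reduction)
    RM = QK.max(axis=1, keepdims=True)
    SN = np.exp(QK - RM)
    SD = SN.sum(axis=1, keepdims=True)
    A = SN / SD
    # Stage 3: output Einsum  O[m, f] = sum_n A[m, n] * V[n, f]
    return np.einsum('mn,nf->mf', A, V)

# Example: 4 queries, 6 keys/values, embedding dim 8
rng = np.random.default_rng(0)
O = attention_cascade(rng.normal(size=(4, 8)),
                      rng.normal(size=(6, 8)),
                      rng.normal(size=(6, 8)))
print(O.shape)  # (4, 8)
```

Viewing the kernel this way exposes the per-stage dependencies (e.g., the row max and row sum each require a full pass over the scores), which is the kind of structure the paper uses to reason about lower bounds on passes through the input data.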

Authors (6)
  1. Nandeeka Nayak (2 papers)
  2. Xinrui Wu (10 papers)
  3. Toluwanimi O. Odemuyiwa (3 papers)
  4. Michael Pellauer (16 papers)
  5. Joel S. Emer (13 papers)
  6. Christopher W. Fletcher (13 papers)
Citations (3)
