Data Movement Is All You Need: A Case Study on Optimizing Transformers (2007.00072v3)

Published 30 Jun 2020 in cs.LG and stat.ML

Abstract: Transformers are one of the most important machine learning workloads today. Training one is a very compute-intensive task, often taking days or weeks, and significant attention has been given to optimizing transformers. Despite this, existing implementations do not efficiently utilize GPUs. We find that data movement is the key bottleneck when training. Due to Amdahl's Law and massive improvements in compute performance, training has now become memory-bound. Further, existing frameworks use suboptimal data layouts. Using these insights, we present a recipe for globally optimizing data movement in transformers. We reduce data movement by up to 22.91% and overall achieve a 1.30x performance improvement over state-of-the-art frameworks when training a BERT encoder layer and 1.19x for the entire BERT. Our approach is applicable more broadly to optimizing deep neural networks, and offers insight into how to tackle emerging performance bottlenecks.
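The memory-bound claim can be made concrete with a back-of-the-envelope roofline comparison: pointwise operators such as bias addition and GELU perform far fewer floating-point operations per byte moved than the matrix multiplications they surround. The sketch below is not from the paper; the tensor shapes (BERT-large-like encoder dimensions), the FLOP-per-element estimate for GELU, and the V100-class hardware numbers are illustrative assumptions.

```python
# Illustrative roofline comparison (shapes, FLOP counts, and hardware
# numbers are assumptions, not figures from the paper).

BYTES_PER_ELEM = 2                              # fp16
BATCH, SEQ, HIDDEN, FFN = 8, 512, 1024, 4096    # BERT-large-like encoder shapes
N = BATCH * SEQ                                 # rows of the activation matrix

def matmul_intensity(n, k, m):
    """FLOPs per byte for C = A @ B with A: n x k, B: k x m (fp16 in/out)."""
    flops = 2 * n * k * m
    bytes_moved = (n * k + k * m + n * m) * BYTES_PER_ELEM
    return flops / bytes_moved

def pointwise_intensity(n, m, flops_per_elem=8):
    """FLOPs per byte for an elementwise op (e.g. GELU): read and write each element."""
    flops = flops_per_elem * n * m
    bytes_moved = 2 * n * m * BYTES_PER_ELEM
    return flops / bytes_moved

# Ridge point of a V100-class GPU: ~125 Tflop/s fp16 tensor cores, ~900 GB/s HBM2.
ridge = 125e12 / 900e9

print(f"QKV-style matmul intensity : {matmul_intensity(N, HIDDEN, HIDDEN):7.1f} flop/byte")
print(f"GELU on FFN activations    : {pointwise_intensity(N, FFN):7.1f} flop/byte")
print(f"GPU ridge point            : {ridge:7.1f} flop/byte")
# The matmul sits well above the ridge point (compute-bound), while the
# pointwise op sits far below it (memory-bound) -- which is why fusing
# pointwise operators and choosing better data layouts reduces runtime.
```

Under these assumed shapes, the matmul reaches hundreds of FLOPs per byte while GELU manages only about two, so the pointwise operators are limited by memory bandwidth rather than compute, matching the paper's observation that data movement dominates training time.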

Authors (5)
  1. Andrei Ivanov (17 papers)
  2. Nikoli Dryden (21 papers)
  3. Tal Ben-Nun (53 papers)
  4. Shigang Li (25 papers)
  5. Torsten Hoefler (203 papers)
Citations (114)
