A Full-Stack Search Technique for Domain Optimized Deep Learning Accelerators (2105.12842v3)

Published 26 May 2021 in cs.LG, cs.AR, and cs.PF

Abstract: The rapidly-changing deep learning landscape presents a unique opportunity for building inference accelerators optimized for specific datacenter-scale workloads. We propose Full-stack Accelerator Search Technique (FAST), a hardware accelerator search framework that defines a broad optimization environment covering key design decisions within the hardware-software stack, including hardware datapath, software scheduling, and compiler passes such as operation fusion and tensor padding. In this paper, we analyze bottlenecks in state-of-the-art vision and NLP models, including EfficientNet and BERT, and use FAST to design accelerators capable of addressing these bottlenecks. FAST-generated accelerators optimized for single workloads improve Perf/TDP by 3.7x on average across all benchmarks compared to TPU-v3. A FAST-generated accelerator optimized for serving a suite of workloads improves Perf/TDP by 2.4x on average compared to TPU-v3. Our return on investment analysis shows that FAST-generated accelerators can potentially be practical for moderate-sized datacenter deployments.
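To make the idea of a "full-stack" search concrete, here is a minimal sketch (not the authors' implementation) of the kind of loop FAST describes: sample a joint hardware/software configuration covering the datapath, the schedule, and compiler passes such as fusion and padding, score it with a cost model, and keep the configuration with the best Perf/TDP. All parameter names, value ranges, and the cost model below are illustrative assumptions; the paper uses real simulators and a more sophisticated optimizer.

```python
# Sketch of a joint hardware-software accelerator search loop.
# Everything here (parameters, ranges, cost model) is a hypothetical stand-in.
import random
from dataclasses import dataclass


@dataclass
class Config:
    pe_rows: int      # systolic-array height (hardware datapath)
    pe_cols: int      # systolic-array width (hardware datapath)
    l2_mb: int        # on-chip buffer size in MB (hardware datapath)
    fuse_ops: bool    # compiler pass: operation fusion on/off
    pad_to: int       # compiler pass: tensor padding multiple
    tile_size: int    # software scheduling: loop tile size


def sample_config() -> Config:
    """Draw one point from the joint hardware-software search space."""
    return Config(
        pe_rows=random.choice([64, 128, 256]),
        pe_cols=random.choice([64, 128, 256]),
        l2_mb=random.choice([8, 16, 32, 64]),
        fuse_ops=random.choice([True, False]),
        pad_to=random.choice([1, 8, 128]),
        tile_size=random.choice([32, 64, 128]),
    )


def perf_per_tdp(cfg: Config) -> float:
    """Placeholder cost model returning a Perf/TDP proxy (higher is better).

    A real flow would call a performance simulator and a power model.
    """
    macs = cfg.pe_rows * cfg.pe_cols
    utilization = 0.9 if cfg.fuse_ops else 0.6        # fusion cuts memory stalls
    utilization *= min(1.0, cfg.tile_size / 128)       # crude scheduling effect
    padding_overhead = 1.0 + (cfg.pad_to - 1) * 0.001  # padded FLOPs are wasted
    perf = macs * utilization / padding_overhead
    tdp = 50 + 0.005 * macs + 1.5 * cfg.l2_mb           # watts, fabricated numbers
    return perf / tdp


def search(trials: int = 1000) -> tuple[Config, float]:
    """Random search over the joint space; the loop structure is the point,
    not the optimizer."""
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = sample_config()
        score = perf_per_tdp(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score


if __name__ == "__main__":
    cfg, score = search()
    print(f"best config: {cfg}\nPerf/TDP proxy: {score:.3f}")
```

The key design point the sketch tries to capture is that hardware parameters, scheduling choices, and compiler passes are searched jointly rather than fixed in sequence, which is what lets the framework target workload-specific bottlenecks.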

Authors (7)
  1. Dan Zhang (171 papers)
  2. Safeen Huda (4 papers)
  3. Ebrahim Songhori (3 papers)
  4. Kartik Prabhu (33 papers)
  5. Quoc Le (39 papers)
  6. Anna Goldie (19 papers)
  7. Azalia Mirhoseini (40 papers)
Citations (45)