FTRANS: Energy-Efficient Acceleration of Transformers using FPGA (2007.08563v1)

Published 16 Jul 2020 in cs.DC and cs.LG

Abstract: In NLP, the "Transformer" architecture was proposed as the first transduction model relying entirely on self-attention mechanisms, without sequence-aligned recurrent neural networks (RNNs) or convolution, and it achieved significant improvements on sequence-to-sequence tasks. However, the intensive computation and storage demands of these pre-trained language representations have impeded their deployment on computation- and memory-constrained devices. The field-programmable gate array (FPGA) is widely used to accelerate deep learning algorithms thanks to its high parallelism and low latency, yet the trained models are still too large to fit on an FPGA fabric. In this paper, we propose an efficient acceleration framework, Ftrans, for transformer-based large-scale language representations. Our framework includes an enhanced block-circulant matrix (BCM)-based weight representation that enables model compression of large-scale language representations at the algorithm level with little accuracy degradation, together with an acceleration design at the architecture level. Experimental results show that our proposed framework reduces the model size of NLP models by up to 16x. Our FPGA design achieves 27.07x and 81x improvements in performance and energy efficiency compared to CPU, and up to an 8.80x improvement in energy efficiency compared to GPU.
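The core compression idea mentioned in the abstract, block-circulant matrix (BCM) weight representation, stores each b x b block of a weight matrix as a single length-b vector and evaluates the block's matrix-vector product via the FFT. The following is a minimal illustrative sketch of that general technique, not the paper's implementation; the block size, layer dimensions, and random weights are assumptions chosen only for the demo.

```python
# Sketch: block-circulant matrix (BCM) compression of a dense layer,
# with FFT-based per-block matrix-vector products.
import numpy as np

def circ_matvec(c, x):
    """y = C @ x, where C is the circulant matrix whose first column is c."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def bcm_matvec(blocks, x, b):
    """Multiply a block-circulant weight matrix by x.

    blocks: shape (p, q, b), the first column of each b x b circulant block
            (p*q*b parameters instead of (p*b)*(q*b) for a dense matrix).
    x:      input vector of length q*b.
    """
    p, q, _ = blocks.shape
    x = x.reshape(q, b)
    y = np.zeros((p, b))
    for i in range(p):
        for j in range(q):
            y[i] += circ_matvec(blocks[i, j], x[j])
    return y.reshape(p * b)

# Demo with assumed sizes: an 8x8 layer compressed with block size b = 4,
# i.e. 16 stored parameters instead of 64.
rng = np.random.default_rng(0)
b, p, q = 4, 2, 2
blocks = rng.standard_normal((p, q, b))
x = rng.standard_normal(q * b)
y = bcm_matvec(blocks, x, b)

# Cross-check against the explicitly materialized block-circulant matrix.
def circulant(c):
    return np.stack([np.roll(c, k) for k in range(len(c))], axis=1)

W = np.block([[circulant(blocks[i, j]) for j in range(q)] for i in range(p)])
assert np.allclose(y, W @ x)
```

With block size b, the storage per block drops from b^2 to b values, which is the source of the up-to-16x model-size reduction the abstract reports; the FFT-based product also lowers per-block multiplication cost from O(b^2) to O(b log b).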

Authors (10)
  1. Bingbing Li (24 papers)
  2. Santosh Pandey (18 papers)
  3. Haowen Fang (12 papers)
  4. Yanjun Lyv (1 paper)
  5. Ji Li (186 papers)
  6. Jieyang Chen (25 papers)
  7. Mimi Xie (14 papers)
  8. Lipeng Wan (27 papers)
  9. Hang Liu (135 papers)
  10. Caiwen Ding (98 papers)
Citations (143)