
Hybrid Dynamic Pruning: A Pathway to Efficient Transformer Inference (2407.12893v1)

Published 17 Jul 2024 in cs.LG and cs.AI

Abstract: Transformer models have become central to deep learning, driving improvements across a wide range of applications from language understanding to image recognition. Despite their success, deploying these models in real-time applications, particularly on edge devices, remains challenging due to the quadratic computational cost and memory demands of attention. To overcome these challenges we introduce Hybrid Dynamic Pruning (HDP), an efficient algorithm-architecture co-design approach that accelerates Transformers by exploiting head sparsity, block sparsity, and approximation opportunities to reduce attention computation and memory access. Motivated by the observation that attention scores and attention heads are highly redundant, we propose an integer-based row-balanced block pruning method that prunes unimportant blocks of the attention matrix at run time, and an integer-based head pruning method that detects and prunes unimportant heads at an early stage, also at run time. We further propose an approximation method that reduces attention computation. To support these methods with low latency and high power efficiency, we propose an HDP co-processor architecture.
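The abstract does not spell out implementation details, but the run-time, row-balanced block pruning idea can be illustrated with a short sketch: partition the pre-softmax attention scores into blocks, estimate each block's importance from a cheap integer proxy, and keep the same number of top-scoring blocks in every block-row before applying softmax. The block size, the quantization proxy, and the `keep_per_row` budget below are illustrative assumptions, not the paper's actual parameters or hardware design.

```python
import numpy as np

def row_balanced_block_prune(scores, block, keep_per_row):
    """Illustrative sketch of row-balanced block pruning (not the paper's exact method).

    scores:        (T, T) pre-softmax attention scores for one head
    block:         block edge length (T assumed divisible by block)
    keep_per_row:  number of blocks retained in each block-row
    """
    T = scores.shape[0]
    nb = T // block
    # Cheap integer proxy: quantize scores to int8-like levels, then sum
    # absolute values inside each block as its importance estimate.
    scale = np.abs(scores).max() / 127 + 1e-9
    q = np.clip(np.round(scores / scale), -127, 127)
    imp = np.abs(q).reshape(nb, block, nb, block).sum(axis=(1, 3))  # (nb, nb)

    # Keep the top-k blocks in every block-row (row-balanced), mask the rest.
    mask = np.full((nb, nb), -np.inf)
    for r in range(nb):
        keep = np.argsort(imp[r])[-keep_per_row:]
        mask[r, keep] = 0.0
    full_mask = np.kron(mask, np.ones((block, block)))  # expand to element level
    return scores + full_mask


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, block = 64, 16
    s = rng.normal(size=(T, T)).astype(np.float32)
    pruned = row_balanced_block_prune(s, block, keep_per_row=2)
    # Softmax over the pruned scores; masked blocks get ~0 attention mass.
    p = np.exp(pruned - pruned.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    print("fraction of near-zero attention entries:", float((p < 1e-6).mean()))
```

In the paper's setting, the importance estimate and pruning decision are computed with integer arithmetic on a dedicated co-processor so that the skipped blocks never incur full-precision score or softmax computation; the NumPy version above only mimics the selection logic.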

Authors (7)
  1. Ghadeer Jaradat (1 paper)
  2. Mohammed Tolba (1 paper)
  3. Ghada Alsuhli (3 papers)
  4. Hani Saleh (10 papers)
  5. Mahmoud Al-Qutayri (19 papers)
  6. Thanos Stouraitis (4 papers)
  7. Baker Mohammad (10 papers)