Efficient Transformer-based Large Scale Language Representations using Hardware-friendly Block Structured Pruning (2009.08065v4)

Published 17 Sep 2020 in cs.CL, cs.AI, and cs.LG

Abstract: Pre-trained large-scale language models have increasingly demonstrated high accuracy on many NLP tasks. However, the limited weight storage and computational speed on hardware platforms have impeded the popularity of pre-trained models, especially in the era of edge computing. In this work, we propose an efficient transformer-based large-scale language representation using hardware-friendly block-structured pruning. We incorporate the reweighted group Lasso into block-structured pruning for optimization. In addition to significantly reducing weight storage and computation, the proposed approach achieves high compression rates. Experimental results on different models (BERT, RoBERTa, and DistilBERT) on the General Language Understanding Evaluation (GLUE) benchmark tasks show that we achieve up to 5.0x compression with zero or minor accuracy degradation on certain task(s). Our proposed method is also orthogonal to existing compact pre-trained language models such as DistilBERT, which uses knowledge distillation, since a further 1.79x average compression rate can be achieved on top of DistilBERT with zero or minor accuracy degradation. The final compressed model is suitable for deployment on resource-constrained edge devices.
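
The core idea, block-structured pruning driven by a reweighted group Lasso, can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal PyTorch illustration assuming square blocks of a hypothetical size `block`, a penalty coefficient of 1e-4, and helper names (`block_group_lasso`, `prune_blocks`, `keep_ratio`) chosen here for clarity.

```python
# Minimal sketch, not the paper's released code: block-structured pruning of a
# single weight matrix with a reweighted group-Lasso penalty (PyTorch assumed).
import torch


def block_group_lasso(weight: torch.Tensor, block: int, eps: float = 1e-3) -> torch.Tensor:
    """Reweighted group-Lasso penalty over non-overlapping block x block groups."""
    rows, cols = weight.shape
    nr, nc = rows // block, cols // block
    blocks = weight[: nr * block, : nc * block].reshape(nr, block, nc, block)
    # Frobenius norm of each block (small epsilon keeps the gradient finite).
    group_norms = (blocks.pow(2).sum(dim=(1, 3)) + 1e-12).sqrt()
    # Reweighting: blocks that are already small receive a larger penalty weight,
    # so whole blocks are pushed toward exactly zero -- the hardware-friendly structure.
    reweight = 1.0 / (group_norms.detach() + eps)
    return (reweight * group_norms).sum()


def prune_blocks(weight: torch.Tensor, block: int, keep_ratio: float) -> torch.Tensor:
    """Zero out entire blocks whose norm falls below the (1 - keep_ratio) quantile."""
    rows, cols = weight.shape
    nr, nc = rows // block, cols // block
    blocks = weight[: nr * block, : nc * block].reshape(nr, block, nc, block)
    norms = blocks.pow(2).sum(dim=(1, 3)).sqrt()
    threshold = torch.quantile(norms.flatten(), 1.0 - keep_ratio)
    mask = (norms >= threshold).to(weight.dtype)[:, None, :, None]
    pruned = (blocks * mask).reshape(nr * block, nc * block)
    out = weight.clone()
    out[: nr * block, : nc * block] = pruned
    return out


# Usage: during fine-tuning, add the penalty to the task loss; afterwards,
# hard-prune so only roughly keep_ratio of the blocks stay nonzero.
w = torch.randn(768, 768, requires_grad=True)           # e.g. one BERT projection matrix
penalty = 1e-4 * block_group_lasso(w, block=64)         # lambda = 1e-4 is an assumption
penalty.backward()                                       # in practice: (task_loss + penalty).backward()
w_pruned = prune_blocks(w.detach(), block=64, keep_ratio=0.2)   # keep ~20% of blocks
```

Pruning at block granularity, rather than individual weights, is what keeps the resulting sparsity pattern regular enough to exploit for storage and speed on edge hardware, which is the motivation behind the hardware-friendly block structure described in the abstract.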

Authors (7)
  1. Bingbing Li (24 papers)
  2. Zhenglun Kong (33 papers)
  3. Tianyun Zhang (26 papers)
  4. Ji Li (186 papers)
  5. Zhengang Li (31 papers)
  6. Hang Liu (135 papers)
  7. Caiwen Ding (98 papers)
Citations (59)