
LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression (2004.04124v2)

Published 8 Apr 2020 in cs.CL and cs.LG

Abstract: BERT is a cutting-edge language representation model pre-trained on a large corpus, which achieves superior performance on various natural language understanding tasks. However, a major blocking issue in applying BERT to online services is that it is memory-intensive and leads to unsatisfactory latency for user requests, raising the necessity of model compression. Existing solutions leverage the knowledge distillation framework to learn a smaller model that imitates the behaviors of BERT. However, the knowledge distillation procedure is itself expensive, as it requires sufficient training data to imitate the teacher model. In this paper, we address this issue by proposing a hybrid solution named LadaBERT (Lightweight adaptation of BERT through hybrid model compression), which combines the advantages of different model compression methods, including weight pruning, matrix factorization and knowledge distillation. LadaBERT achieves state-of-the-art accuracy on various public datasets while the training overheads can be reduced by an order of magnitude.
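
To make the three ingredients named in the abstract concrete, below is a minimal sketch (assuming PyTorch) of magnitude-based weight pruning, SVD matrix factorization of a linear layer, and a knowledge-distillation loss. The function names, rank, sparsity level, temperature, and mixing weight are illustrative assumptions, not LadaBERT's actual components or hyperparameters.

```python
# Sketch of the three compression ingredients combined in hybrid approaches like
# LadaBERT: SVD factorization, magnitude pruning, and distillation. Values such
# as `rank`, `sparsity`, `T`, and `alpha` are placeholders, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace a dense linear layer with a rank-limited two-layer factorization."""
    W = layer.weight.data                      # shape: (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]               # (out, rank), singular values absorbed
    V_r = Vh[:rank, :]                         # (rank, in)
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = V_r
    second.weight.data = U_r
    if layer.bias is not None:
        second.bias.data = layer.bias.data
    return nn.Sequential(first, second)


def magnitude_prune_(layer: nn.Linear, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights of a linear layer in place."""
    W = layer.weight.data
    k = int(W.numel() * sparsity)
    if k > 0:
        threshold = W.abs().flatten().kthvalue(k).values
        W.mul_((W.abs() > threshold).float())


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target KL against the teacher with the hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In a hybrid setup, factorization and pruning shrink the student before or during training, while the distillation loss transfers the teacher's behavior; the paper's contribution is in how these steps are combined to cut training overhead, which this sketch does not attempt to reproduce.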

Authors (9)
  1. Yihuan Mao (6 papers)
  2. Yujing Wang (53 papers)
  3. Chufan Wu (3 papers)
  4. Chen Zhang (403 papers)
  5. Yang Wang (670 papers)
  6. Yaming Yang (39 papers)
  7. Quanlu Zhang (14 papers)
  8. Yunhai Tong (69 papers)
  9. Jing Bai (46 papers)
Citations (70)