
AdaBERT: Task-Adaptive BERT Compression with Differentiable Neural Architecture Search (2001.04246v2)

Published 13 Jan 2020 in cs.CL and cs.LG

Abstract: Large pre-trained language models such as BERT have shown their effectiveness in various natural language processing tasks. However, their huge parameter size makes them difficult to deploy in real-time applications that require quick inference with limited resources. Existing methods compress BERT into small models, but such compression is task-independent, i.e., the same compressed BERT is used for all downstream tasks. Motivated by the necessity and benefits of task-oriented BERT compression, we propose AdaBERT, a novel compression method that leverages differentiable Neural Architecture Search to automatically compress BERT into task-adaptive small models for specific tasks. We incorporate a task-oriented knowledge distillation loss to provide search hints and an efficiency-aware loss as a search constraint, which together enable a good trade-off between efficiency and effectiveness in task-adaptive BERT compression. We evaluate AdaBERT on several NLP tasks, and the results demonstrate that the task-adaptive compressed models are 12.7x to 29.3x faster than BERT in inference and 11.5x to 17.0x smaller in parameter size, while maintaining comparable performance.
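The abstract describes a search objective that combines a task loss, a knowledge-distillation term (hints from the teacher BERT), and an efficiency-aware penalty. A minimal sketch of such a combined objective is below; the function names, the temperature, and the weights `gamma` and `beta` are illustrative placeholders, not the paper's actual hyperparameters or implementation.

```python
import math

def softmax(logits, temperature=1.0):
    # temperature-scaled softmax over a list of logits
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    # soft cross-entropy between teacher and student distributions,
    # the usual form of a knowledge-distillation "hint" term
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

def efficiency_loss(n_params, flops, max_params, max_flops):
    # hypothetical normalized size/compute penalty on the candidate
    # architecture; smaller models score lower
    return 0.5 * (n_params / max_params + flops / max_flops)

def search_objective(task_loss, kd_loss, eff_loss, gamma=0.8, beta=4.0):
    # weighted combination guiding the differentiable architecture search:
    # gamma trades off distillation hints, beta the efficiency constraint
    return task_loss + gamma * kd_loss + beta * eff_loss
```

During a DARTS-style search, an objective of this shape would be evaluated for the architecture mixture and differentiated with respect to both the network weights and the architecture parameters; the efficiency term steers the search toward smaller, faster candidates.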

Authors (10)
  1. Daoyuan Chen (32 papers)
  2. Yaliang Li (117 papers)
  3. Minghui Qiu (58 papers)
  4. Zhen Wang (571 papers)
  5. Bofang Li (4 papers)
  6. Bolin Ding (112 papers)
  7. Hongbo Deng (20 papers)
  8. Jun Huang (126 papers)
  9. Wei Lin (207 papers)
  10. Jingren Zhou (198 papers)
Citations (102)