Revisiting Token Dropping Strategy in Efficient BERT Pretraining (2305.15273v1)

Published 24 May 2023 in cs.CL

Abstract: Token dropping is a recently-proposed strategy to speed up the pretraining of masked language models, such as BERT, by skipping the computation of a subset of the input tokens at several middle layers. It can effectively reduce the training time without degrading much performance on downstream tasks. However, we empirically find that token dropping is prone to a semantic loss problem and falls short in handling semantic-intense tasks. Motivated by this, we propose a simple yet effective semantic-consistent learning method (ScTD) to improve the token dropping. ScTD aims to encourage the model to learn how to preserve the semantic information in the representation space. Extensive experiments on 12 tasks show that, with the help of our ScTD, token dropping can achieve consistent and significant performance gains across all task types and model sizes. More encouragingly, ScTD saves up to 57% of pretraining time and brings up to +1.56% average improvement over the vanilla token dropping.
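To make the mechanism in the abstract concrete, below is a minimal sketch of the token-dropping idea: a subset of low-importance tokens bypasses the middle encoder layers and is merged back before the final layers. This is an assumption-based illustration in PyTorch, not the authors' implementation and not the proposed ScTD method; the class name, layer split, keep ratio, and importance scores are all hypothetical.

# Minimal sketch of token dropping in a transformer encoder (illustrative only).
import torch
import torch.nn as nn

class TokenDroppingEncoder(nn.Module):
    def __init__(self, dim=256, heads=4, n_lower=2, n_middle=4, n_upper=2, keep_ratio=0.5):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.lower = nn.ModuleList(make_layer() for _ in range(n_lower))
        self.middle = nn.ModuleList(make_layer() for _ in range(n_middle))
        self.upper = nn.ModuleList(make_layer() for _ in range(n_upper))
        self.keep_ratio = keep_ratio

    def forward(self, x, importance):
        # x: (batch, seq, dim); importance: (batch, seq) per-token scores
        # (e.g. a running masked-LM loss could serve as the score).
        for blk in self.lower:
            x = blk(x)
        # Keep only the top-scoring tokens for the middle layers.
        k = max(1, int(x.size(1) * self.keep_ratio))
        keep_idx = importance.topk(k, dim=1).indices.sort(dim=1).values  # (batch, k)
        idx = keep_idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
        kept = x.gather(1, idx)
        for blk in self.middle:
            kept = blk(kept)
        # Merge the processed tokens back into the full sequence.
        x = x.scatter(1, idx, kept)
        for blk in self.upper:
            x = blk(x)
        return x

# Usage with random inputs and importance scores:
enc = TokenDroppingEncoder()
h = torch.randn(2, 16, 256)
scores = torch.rand(2, 16)
out = enc(h, scores)  # shape (2, 16, 256)

Because the dropped tokens never pass through the middle layers, their representations there are simply the lower-layer outputs, which is the source of the "semantic loss" the paper identifies and that ScTD is designed to counteract.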

Authors (7)
  1. Qihuang Zhong (22 papers)
  2. Liang Ding (159 papers)
  3. Juhua Liu (37 papers)
  4. Xuebo Liu (54 papers)
  5. Min Zhang (630 papers)
  6. Bo Du (264 papers)
  7. Dacheng Tao (829 papers)
Citations (7)
