NarrowBERT: Accelerating Masked Language Model Pretraining and Inference (2301.04761v2)

Published 11 Jan 2023 in cs.CL and cs.LG

Abstract: Large-scale language model pretraining is a very successful form of self-supervised learning in natural language processing, but it is increasingly expensive to perform as the models and pretraining corpora have become larger over time. We propose NarrowBERT, a modified transformer encoder that increases the throughput for masked language model pretraining by more than $2\times$. NarrowBERT sparsifies the transformer model such that the self-attention queries and feedforward layers only operate on the masked tokens of each sentence during pretraining, rather than all of the tokens as with the usual transformer encoder. We also show that NarrowBERT increases the throughput at inference time by as much as $3.5\times$ with minimal (or no) performance degradation on sentence encoding tasks like MNLI. Finally, we examine the performance of NarrowBERT on the IMDB and Amazon reviews classification and CoNLL NER tasks and show that it is comparable to standard BERT performance.
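To make the "narrowing" idea concrete, the sketch below shows one plausible reading of the abstract: an encoder layer whose attention queries and feedforward sublayer run only on the masked positions, while keys and values still come from every token. This is not the authors' released implementation; the layer, argument names, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only -- NOT the NarrowBERT reference code.
# Queries and the feedforward sublayer operate only on masked positions;
# keys/values are computed over the full sequence.
import torch
import torch.nn as nn


class NarrowEncoderLayer(nn.Module):
    def __init__(self, d_model=768, n_heads=12, d_ff=3072):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, hidden, masked_idx):
        # hidden: (batch, seq_len, d_model); masked_idx: (batch, n_masked)
        # Gather hidden states at the masked positions only.
        idx = masked_idx.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
        narrow = torch.gather(hidden, 1, idx)
        # Queries come from masked positions; keys/values from all tokens.
        attn_out, _ = self.attn(narrow, hidden, hidden)
        narrow = self.norm1(narrow + attn_out)
        # Feedforward is applied only to the (much smaller) masked slice.
        narrow = self.norm2(narrow + self.ff(narrow))
        # Scatter the updated masked-position states back into the sequence.
        return hidden.scatter(1, idx, narrow)


# Example usage with dummy data.
layer = NarrowEncoderLayer()
x = torch.randn(2, 128, 768)             # batch of 2 sequences, length 128
masked = torch.randint(0, 128, (2, 20))  # 20 masked positions per sequence
y = layer(x, masked)
print(y.shape)  # torch.Size([2, 128, 768])
```

Because only ~15% of tokens are masked in standard MLM pretraining, restricting the query and feedforward computation to those positions is what yields the reported throughput gains.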

Authors (5)
  1. Haoxin Li (13 papers)
  2. Phillip Keung (11 papers)
  3. Daniel Cheng (7 papers)
  4. Jungo Kasai (38 papers)
  5. Noah A. Smith (224 papers)
Citations (3)
