
TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning (2111.04198v4)

Published 7 Nov 2021 in cs.CL

Abstract: Masked language models (MLMs) such as BERT and RoBERTa have revolutionized the field of Natural Language Understanding in the past few years. However, existing pre-trained MLMs often output an anisotropic distribution of token representations that occupies a narrow subset of the entire representation space. Such token representations are not ideal, especially for tasks that demand discriminative semantic meanings of distinct tokens. In this work, we propose TaCL (Token-aware Contrastive Learning), a novel continual pre-training approach that encourages BERT to learn an isotropic and discriminative distribution of token representations. TaCL is fully unsupervised and requires no additional data. We extensively test our approach on a wide range of English and Chinese benchmarks. The results show that TaCL brings consistent and notable improvements over the original BERT model. Furthermore, we conduct a detailed analysis to reveal the merits and inner workings of our approach.
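To make the idea of a token-aware contrastive objective concrete, below is a minimal PyTorch sketch of one plausible formulation: an InfoNCE-style loss over token representations, where each token of the model being trained is pulled toward the representation of the same position produced by a frozen reference encoder and pushed away from the other tokens in the sequence. The function name, the teacher/student framing, and the temperature value are illustrative assumptions; the paper's exact loss, negative sampling, and training setup may differ.

```python
# Hypothetical sketch of a token-level contrastive (InfoNCE-style) loss.
# Assumptions (not taken from the paper): a frozen "teacher" copy of BERT
# provides target token representations, and in-sequence tokens serve as negatives.
import torch
import torch.nn.functional as F


def token_contrastive_loss(student_tokens, teacher_tokens, attention_mask, temperature=0.07):
    """
    student_tokens: (batch, seq_len, hidden) token representations from the model being trained
    teacher_tokens: (batch, seq_len, hidden) token representations from a frozen reference encoder
    attention_mask: (batch, seq_len) with 1 for real tokens and 0 for padding
    """
    # L2-normalize so dot products become cosine similarities
    s = F.normalize(student_tokens, dim=-1)
    t = F.normalize(teacher_tokens, dim=-1)

    # Similarity between every student token and every teacher token
    # within the same sequence: (batch, seq_len, seq_len)
    logits = torch.bmm(s, t.transpose(1, 2)) / temperature

    # The positive for position i is the teacher token at position i;
    # all other positions in the sequence act as negatives.
    batch, seq_len, _ = logits.shape
    labels = torch.arange(seq_len, device=logits.device).expand(batch, seq_len)

    loss = F.cross_entropy(
        logits.reshape(batch * seq_len, seq_len),
        labels.reshape(batch * seq_len),
        reduction="none",
    )

    # Average only over non-padding positions
    mask = attention_mask.reshape(-1).float()
    return (loss * mask).sum() / mask.sum()
```

Because the targets come from a frozen copy of the same pre-trained model, an objective of this shape needs no labels or extra data, which is consistent with the abstract's claim that the approach is fully unsupervised.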

Authors (7)
  1. Yixuan Su (35 papers)
  2. Fangyu Liu (59 papers)
  3. Zaiqiao Meng (42 papers)
  4. Tian Lan (162 papers)
  5. Lei Shu (82 papers)
  6. Ehsan Shareghi (54 papers)
  7. Nigel Collier (83 papers)
Citations (55)
