Weighted Sampling for Masked Language Modeling (2302.14225v2)

Published 28 Feb 2023 in cs.CL and cs.AI

Abstract: Masked Language Modeling (MLM) is widely used to pretrain language models. The standard random masking strategy in MLM causes the pre-trained language models (PLMs) to be biased toward high-frequency tokens. Representation learning of rare tokens is poor and PLMs have limited performance on downstream tasks. To alleviate this frequency bias issue, we propose two simple and effective Weighted Sampling strategies for masking tokens based on the token frequency and training loss. We apply these two strategies to BERT and obtain Weighted-Sampled BERT (WSBERT). Experiments on the Semantic Textual Similarity (STS) benchmark show that WSBERT significantly improves sentence embeddings over BERT. Combining WSBERT with calibration methods and prompt learning further improves sentence embeddings. We also investigate fine-tuning WSBERT on the GLUE benchmark and show that Weighted Sampling also improves the transfer learning capability of the backbone PLM. We further analyze and provide insights into how WSBERT improves token embeddings.
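The abstract describes masking tokens with probabilities weighted by corpus frequency rather than uniformly at random. The sketch below is a minimal illustration of the frequency-based variant of that idea, not the authors' exact algorithm; the smoothing exponent `alpha`, the `mask_ratio`, and the helper name are assumptions introduced for illustration.

```python
import torch


def frequency_weighted_mask(input_ids, token_freqs, mask_token_id,
                            mask_ratio=0.15, alpha=0.5):
    """Illustrative frequency-weighted masking for MLM pretraining.

    Positions holding rare tokens are sampled for masking more often
    than positions holding frequent tokens.

    input_ids:   (batch, seq_len) LongTensor of token ids
    token_freqs: (vocab_size,) tensor of corpus-level token counts
    alpha:       hypothetical smoothing exponent controlling how strongly
                 rare tokens are upweighted
    """
    # Per-position weight: rarer tokens get larger weights.
    freqs = token_freqs[input_ids].clamp(min=1).float()
    weights = freqs.pow(-alpha)

    # Normalize per sequence and sample ~mask_ratio of the positions
    # without replacement.
    probs = weights / weights.sum(dim=-1, keepdim=True)
    num_to_mask = max(1, int(mask_ratio * input_ids.size(1)))
    mask_positions = torch.multinomial(probs, num_to_mask)

    # Build MLM labels (-100 is ignored by cross-entropy) and replace
    # the sampled positions with the [MASK] token id.
    labels = torch.full_like(input_ids, -100)
    masked = input_ids.clone()
    batch_idx = torch.arange(input_ids.size(0)).unsqueeze(-1)
    labels[batch_idx, mask_positions] = input_ids[batch_idx, mask_positions]
    masked[batch_idx, mask_positions] = mask_token_id
    return masked, labels
```

The loss-based strategy mentioned in the abstract could follow the same pattern, with per-token training loss replacing inverse frequency as the sampling weight.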

Authors (8)
  1. Linhan Zhang (5 papers)
  2. Qian Chen (264 papers)
  3. Wen Wang (144 papers)
  4. Chong Deng (22 papers)
  5. Xin Cao (52 papers)
  6. Kongzhang Hao (7 papers)
  7. Yuxin Jiang (26 papers)
  8. Wei Wang (1793 papers)
Citations (2)
