
Char2Subword: Extending the Subword Embedding Space Using Robust Character Compositionality (2010.12730v3)

Published 24 Oct 2020 in cs.CL

Abstract: Byte-pair encoding (BPE) is a ubiquitous algorithm in the subword tokenization process of language models, as it provides multiple benefits. However, this process is based solely on pre-training data statistics, making it hard for the tokenizer to handle infrequent spellings. On the other hand, though robust to misspellings, pure character-level models often lead to unreasonably long sequences and make it harder for the model to learn meaningful words. To alleviate these challenges, we propose a character-based subword module (char2subword) that learns the subword embedding table in pre-trained models like BERT. Our char2subword module builds representations from characters out of the subword vocabulary, and it can be used as a drop-in replacement for the subword embedding table. The module is robust to character-level alterations such as misspellings, word inflection, casing, and punctuation. We further integrate it with BERT through pre-training while keeping BERT transformer parameters fixed, thus providing a practical method. Finally, we show that incorporating our module into mBERT significantly improves the performance on the social media linguistic code-switching evaluation (LinCE) benchmark.
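
The abstract describes the module only at a high level: a character encoder that reproduces the pretrained subword embedding table so it can stand in for that table at the input layer. The sketch below is one plausible PyTorch realization under that description, not the paper's actual architecture; the class name, layer sizes, pooling, and the MSE mimicking objective are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Char2SubwordSketch(nn.Module):
    """Hypothetical character-to-subword encoder. Names and sizes are
    assumptions for illustration, not the paper's exact architecture."""

    def __init__(self, num_chars=1000, char_dim=64, hidden_dim=256, subword_dim=768):
        super().__init__()
        self.char_embed = nn.Embedding(num_chars, char_dim, padding_idx=0)
        layer = nn.TransformerEncoderLayer(
            d_model=char_dim, nhead=4, dim_feedforward=hidden_dim, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.proj = nn.Linear(char_dim, subword_dim)

    def forward(self, char_ids, char_mask):
        # char_ids: (batch, max_chars) character ids spelling out each subword
        # char_mask: (batch, max_chars) bool, True where a real character is present
        x = self.char_embed(char_ids)
        x = self.encoder(x, src_key_padding_mask=~char_mask)
        # Mean-pool over real characters, then project to the subword embedding size
        mask = char_mask.unsqueeze(-1).float()
        pooled = (x * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
        return self.proj(pooled)


def embedding_mimic_loss(module, char_ids, char_mask, frozen_subword_embeddings):
    """Assumed training signal: make the module's output match the frozen (m)BERT
    subword embedding table, keeping the BERT transformer parameters fixed as the
    abstract states. The paper's actual loss may differ."""
    pred = module(char_ids, char_mask)
    return nn.functional.mse_loss(pred, frozen_subword_embeddings)
```

Because the module maps character spellings rather than vocabulary indices to embeddings, it can produce an embedding for any spelling, including misspelled or inflected forms that BPE would fragment into rare subwords.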

Authors (6)
  1. Gustavo Aguilar (16 papers)
  2. Bryan McCann (18 papers)
  3. Tong Niu (25 papers)
  4. Nazneen Rajani (22 papers)
  5. Nitish Keskar (2 papers)
  6. Thamar Solorio (67 papers)
Citations (12)