Scaffold-BPE: Enhancing Byte Pair Encoding for Large Language Models with Simple and Effective Scaffold Token Removal (2404.17808v3)
Abstract: Byte Pair Encoding (BPE) serves as a foundational text tokenization method in NLP. Despite its wide adoption, the original BPE algorithm harbors an inherent flaw: it inadvertently introduces a frequency imbalance among tokens in the text corpus. Since BPE iteratively merges the most frequent token pair in the text corpus to generate a new token and keeps all generated tokens in the vocabulary, it unavoidably retains tokens that primarily act as components of longer tokens and rarely appear on their own. We term such tokens Scaffold Tokens. Due to their infrequent occurrences in the text corpus, Scaffold Tokens pose a learning imbalance issue. To address this issue, we propose Scaffold-BPE, which incorporates a dynamic scaffold token removal mechanism via parameter-free, computation-light, and easy-to-implement modifications to the original BPE method. This approach ensures that low-frequency Scaffold Tokens are excluded from the token representations of given texts, thereby mitigating the frequency imbalance and facilitating model training. In extensive experiments on language modeling and even machine translation, Scaffold-BPE consistently outperforms the original BPE, demonstrating its effectiveness.
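The abstract describes how standard BPE's merge procedure leaves intermediate tokens in the vocabulary. The sketch below is not the authors' implementation; it is a minimal, self-contained illustration of vanilla BPE training, using a hypothetical toy corpus, that shows how a token created as a stepping stone toward a longer merge (e.g. "lo" en route to "lower") can remain in the vocabulary while rarely surfacing in the final tokenization — the situation the paper names Scaffold Tokens.

```python
# Minimal sketch of vanilla BPE training (not the Scaffold-BPE algorithm itself),
# illustrating how intermediate merge results stay in the vocabulary even when
# they are almost always absorbed into longer tokens.
from collections import Counter

def train_bpe(word_freqs, num_merges):
    """word_freqs: dict mapping a word (as a tuple of symbols) to its corpus count."""
    vocab = {sym for word in word_freqs for sym in word}
    merges = []
    for _ in range(num_merges):
        # Count frequencies of adjacent symbol pairs across the corpus.
        pair_counts = Counter()
        for word, freq in word_freqs.items():
            for a, b in zip(word, word[1:]):
                pair_counts[(a, b)] += freq
        if not pair_counts:
            break
        best = max(pair_counts, key=pair_counts.get)
        new_token = best[0] + best[1]
        merges.append(best)
        # Original BPE keeps every merged token, even if it later appears
        # almost exclusively as a component of longer tokens (a "scaffold").
        vocab.add(new_token)
        # Apply the merge to every word in the corpus.
        new_word_freqs = {}
        for word, freq in word_freqs.items():
            merged, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    merged.append(new_token)
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_word_freqs[tuple(merged)] = freq
        word_freqs = new_word_freqs
    return vocab, merges, word_freqs

# Hypothetical toy corpus: "lower" is frequent, so a token like "lo" may be
# created mainly as a stepping stone toward "lower" and rarely stand alone.
corpus = {tuple("lower"): 100, tuple("lowest"): 5, tuple("newer"): 30}
vocab, merges, tokenized = train_bpe(corpus, num_merges=6)
print(merges)      # learned merge rules, in order
print(tokenized)   # final segmentation; intermediate tokens mostly absorbed
```

Scaffold-BPE, as described in the abstract, would additionally identify such low-frequency intermediate tokens and exclude them from the token representations of texts; the removal mechanism itself is detailed in the paper rather than in this sketch.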
- Haoran Lian (6 papers)
- Yizhe Xiong (14 papers)
- Jianwei Niu (42 papers)
- Shasha Mo (2 papers)
- Zhenpeng Su (17 papers)
- Zijia Lin (43 papers)
- Peng Liu (372 papers)
- Hui Chen (298 papers)
- Guiguang Ding (79 papers)
- Jungong Han (111 papers)