Bit-level BPE: Below the byte boundary (2506.07541v1)
Published 9 Jun 2025 in cs.CL
Abstract: Byte-level fallback for subword tokenization has become common practice in LLMs. It has proven to be a highly effective and pragmatic solution for preventing out-of-vocabulary (OOV) failures, especially in larger models. However, decomposing a character into individual bytes significantly increases the sequence length for long-tail tokens in languages such as Chinese, Japanese, and Korean (CJK) and in other character-diverse contexts such as emoji. The increased sequence length leads to longer computation during both training and inference. In this work, we propose a simple compression technique that losslessly reduces this sequence length.
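The sequence-length inflation the abstract describes is easy to see in code. Below is a minimal, hypothetical sketch of a byte-level fallback tokenizer; the names (`TOY_VOCAB`, `tokenize_with_byte_fallback`) and the vocabulary are illustrative, not taken from the paper. The point it demonstrates is standard UTF-8 behavior: a word missing from the subword vocabulary is emitted as one token per byte, so a single CJK character costs three tokens and most emoji cost four.

```python
# Minimal sketch of byte-level fallback, assuming a toy subword
# vocabulary that covers a few ASCII words but no CJK or emoji.
# Illustrative only; not the paper's tokenizer.

TOY_VOCAB = {"hello", "world"}

def tokenize_with_byte_fallback(text: str) -> list[str]:
    """Emit one token per known word; otherwise one token per UTF-8 byte."""
    tokens: list[str] = []
    for word in text.split():
        if word in TOY_VOCAB:
            tokens.append(word)
        else:
            # Out-of-vocabulary: fall back to individual bytes.
            tokens.extend(f"<0x{b:02X}>" for b in word.encode("utf-8"))
    return tokens

print(tokenize_with_byte_fallback("hello world"))  # 2 tokens
print(tokenize_with_byte_fallback("你好"))          # 6 tokens: 3 UTF-8 bytes per CJK character
print(tokenize_with_byte_fallback("🙂"))            # 4 tokens: emoji take 4 UTF-8 bytes
```

The abstract does not detail the compression mechanism itself, only that it operates below the byte boundary and is lossless; the sketch above illustrates the inflation problem the paper targets, not its solution.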