Lossless Vocabulary Reduction for Auto-Regressive Language Models (2510.08102v1)
Abstract: Tokenization -- the process of decomposing a given text into a sequence of subwords called tokens -- is one of the key components in the development of LLMs. In particular, auto-regressive LLMs generate text token by token, i.e., by predicting the next-token distribution given the previous ones, so tokenization directly affects their efficiency in text generation. Since each LLM has its own vocabulary as the set of possible tokens, different LLMs struggle to cooperate with each other at the level of next-token distributions, e.g., for model ensembling. In this paper, we establish a theoretical framework of lossless vocabulary reduction, which efficiently converts a given auto-regressive LLM into one with an arbitrarily small vocabulary without any loss in accuracy. As an application, we demonstrate that LLMs with different tokenizations can cooperate with each other efficiently through their maximal common vocabulary.
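The abstract does not spell out the reduction algorithm itself, but the core idea it relies on can be illustrated for a single generation step: a next-token distribution over a large subword vocabulary induces a distribution over a smaller vocabulary by summing the probability of all large-vocabulary tokens that begin with a given small-vocabulary token. The sketch below is a minimal, hypothetical illustration of this, assuming the reduced (common) vocabulary is the set of single characters and showing only the first step; the paper's full lossless framework must additionally track probability mass carried across token boundaries at later steps. All names here are illustrative, not from the paper.

```python
from collections import defaultdict

def next_char_distribution(next_token_probs: dict[str, float]) -> dict[str, float]:
    """Marginalize a next-token distribution (over a subword vocabulary)
    onto the distribution of the first character of the continuation.

    P(next char = c) is the total probability of all tokens whose
    surface string starts with c.
    """
    char_probs: dict[str, float] = defaultdict(float)
    for token, p in next_token_probs.items():
        if token:  # skip empty/special tokens in this toy illustration
            char_probs[token[0]] += p
    return dict(char_probs)

# Toy example: two models with different subword vocabularies are mapped
# onto the same character-level (common-vocabulary) view of their predictions.
model_a = {"the": 0.5, "th": 0.2, "a": 0.3}
model_b = {"t": 0.6, "an": 0.4}

dist_a = next_char_distribution(model_a)  # {'t': 0.7, 'a': 0.3}
dist_b = next_char_distribution(model_b)  # {'t': 0.6, 'a': 0.4}

# Once both distributions live on the same reduced vocabulary, they can be
# combined, e.g. by simple averaging as one naive ensembling strategy.
ensemble = {c: 0.5 * dist_a.get(c, 0.0) + 0.5 * dist_b.get(c, 0.0)
            for c in set(dist_a) | set(dist_b)}
```

This toy averaging is only one way to combine the aligned distributions; the point is that cooperation at the next-token level becomes possible once both models' predictions are expressed over a shared, reduced vocabulary.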