Formalizing BPE Tokenization (2309.08715v1)
Published 15 Sep 2023 in cs.FL
Abstract: In this paper, we formalize practical byte pair encoding (BPE) tokenization as it is used in LLMs and other NLP systems. In particular, we formally define and investigate the semantics of the SentencePiece and HuggingFace tokenizers, and how they relate to each other depending on how the tokenization rules are constructed. Beyond this, we consider how tokenization can be performed incrementally, as well as left-to-right using an amount of memory constant in the length of the string, enabling, e.g., implementation as a finite-state string-to-string transducer.
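As a rough illustration of the merge-based semantics the abstract refers to, here is a minimal Python sketch of BPE tokenization driven by an ordered merge list. The merge rules, the leftmost tie-breaking, and the character-level starting point are simplifying assumptions for exposition, not the paper's formal definitions or any production tokenizer's exact behavior.

```python
def bpe_tokenize(text: str, merges: list[tuple[str, str]]) -> list[str]:
    """Tokenize by repeatedly applying the highest-priority merge rule.

    `merges` is ordered: earlier pairs have higher priority, mirroring
    how the order in which merges were learned during BPE training
    induces the application order at tokenization time.
    """
    rank = {pair: i for i, pair in enumerate(merges)}
    tokens = list(text)  # start from individual characters (or bytes)
    while True:
        # Find the adjacent pair with the best (lowest) merge rank,
        # breaking ties by taking the leftmost occurrence.
        best, best_rank = None, None
        for i in range(len(tokens) - 1):
            r = rank.get((tokens[i], tokens[i + 1]))
            if r is not None and (best_rank is None or r < best_rank):
                best, best_rank = i, r
        if best is None:
            return tokens  # no applicable merge remains
        # Apply the winning merge and continue.
        tokens[best:best + 2] = [tokens[best] + tokens[best + 1]]


if __name__ == "__main__":
    merges = [("a", "b"), ("ab", "c")]  # hypothetical merge list
    print(bpe_tokenize("abcab", merges))  # -> ['abc', 'ab']
```

Note that this global "best merge first" loop rescans the whole token sequence on every step; the incremental and constant-memory left-to-right formulations the paper investigates avoid exactly this kind of whole-string dependence.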