
Tokenization Is More Than Compression (2402.18376v2)

Published 28 Feb 2024 in cs.CL and cs.AI

Abstract: Tokenization is a foundational step in NLP tasks, bridging raw text and LLMs. Existing tokenization approaches like Byte-Pair Encoding (BPE) originate from the field of data compression, and it has been suggested that the effectiveness of BPE stems from its ability to condense text into a relatively small number of tokens. We test the hypothesis that fewer tokens lead to better downstream performance by introducing PathPiece, a new tokenizer that segments a document's text into the minimum number of tokens for a given vocabulary. Through extensive experimentation we find this hypothesis not to be the case, casting doubt on the understanding of the reasons for effective tokenization. To examine which other factors play a role, we evaluate design decisions across all three phases of tokenization: pre-tokenization, vocabulary construction, and segmentation, offering new insights into the design of effective tokenizers. Specifically, we illustrate the importance of pre-tokenization and the benefits of using BPE to initialize vocabulary construction. We train 64 LLMs with varying tokenization, ranging in size from 350M to 2.4B parameters, all of which are made publicly available.

Unraveling the Intricacies of Tokenization in NLP: A Study Through the Lens of PathPiece

Introduction

Tokenization, a pivotal preprocessing step in NLP, converts raw text into the tokens that statistical models consume. Because it bridges raw text and LLMs, it significantly affects the effectiveness of downstream NLP applications. This paper questions the conventional belief that producing fewer tokens leads to better downstream performance. By introducing PathPiece, a tokenizer designed to minimize the Corpus Token Count (CTC), the research examines tokenization in detail, identifying the factors that matter for its effectiveness and challenging several prevailing assumptions within the field.
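
As a point of reference, CTC is simply the total number of tokens a tokenizer produces over a corpus; the lower the CTC, the more "compressive" the tokenizer. The sketch below assumes a generic `tokenize` callable and a toy corpus, not the paper's actual pipeline.

```python
from typing import Callable, Iterable, List

def corpus_token_count(corpus: Iterable[str], tokenize: Callable[[str], List[str]]) -> int:
    """Corpus Token Count (CTC): total number of tokens produced over a corpus."""
    return sum(len(tokenize(doc)) for doc in corpus)

# Hypothetical usage with whitespace splitting as a stand-in tokenizer.
docs = ["Tokenization is more than compression.", "PathPiece minimizes token count."]
print(corpus_token_count(docs, str.split))  # 9
```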

Related Work and Background

Tokenization is traditionally divided into three stages: pre-tokenization, vocabulary construction, and segmentation, each playing a distinct role in splitting text into units that models can manipulate. Byte-Pair Encoding (BPE) and related subword tokenizers such as WordPiece and Unigram have dominated the field; their design emphasizes compression efficiency, reflecting the hypothesis that fewer tokens lead to better model performance. PathPiece challenges this notion by directly comparing CTC with downstream task performance across varied tokenization strategies.
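
To make the compression framing concrete, BPE-style vocabulary construction greedily merges the most frequent adjacent symbol pair, which is what drives its short token sequences. The following is a minimal sketch of that merge loop in the style of classic subword BPE, not a production implementation; `word_counts` is a hypothetical word-frequency table.

```python
from collections import Counter

def learn_bpe_merges(word_counts: dict, num_merges: int):
    """Greedy BPE sketch: repeatedly merge the most frequent adjacent symbol pair."""
    vocab = {tuple(word): count for word, count in word_counts.items()}  # words as character tuples
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for symbols, count in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pair_counts[pair] += count
        if not pair_counts:
            break
        best = max(pair_counts, key=pair_counts.get)
        merges.append(best)
        # Apply the chosen merge to every word in the working vocabulary.
        new_vocab = {}
        for symbols, count in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] = count
        vocab = new_vocab
    return merges

# Hypothetical toy corpus frequencies.
print(learn_bpe_merges({"low": 5, "lower": 2, "newest": 6, "widest": 3}, num_merges=4))
```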

The PathPiece Experiment

PathPiece was designed to test whether a tokenizer that directly minimizes token count could outperform traditional methods on downstream tasks. The tokenization pipeline was dissected into its three stages, allowing a detailed analysis of how changes in each phase affect overall performance. By training 64 LLMs with varying tokenization strategies at sizes from 350 million to 2.4 billion parameters, the paper provides an extensive basis for comparison. The experiments varied pre-tokenization rules, vocabulary construction mechanisms, and segmentation methods, offering rich insights into the tokenization process. Because PathPiece constructs its vocabulary top-down to minimize CTC, it provides a clean test of the hypothesized benefits of reduced token counts.
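
PathPiece's objective, segmenting text into the fewest tokens possible for a given vocabulary, can be framed as a shortest-path problem over positions in the text. The dynamic-programming sketch below is an illustrative reconstruction under that framing rather than the authors' implementation; `vocab` and `max_len` are hypothetical inputs (the paper works at the byte level with a maximum token width).

```python
def min_token_segmentation(text: str, vocab: set, max_len: int = 16):
    """Segment `text` into the fewest tokens drawn from `vocab`.

    dp[i] holds the minimal number of tokens covering text[:i];
    back[i] records where the last token starts, for reconstruction."""
    n = len(text)
    INF = float("inf")
    dp = [0] + [INF] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            if dp[j] + 1 < dp[i] and text[j:i] in vocab:
                dp[i], back[i] = dp[j] + 1, j
    if dp[n] == INF:
        raise ValueError("text cannot be covered by the given vocabulary")
    tokens, i = [], n
    while i > 0:
        tokens.append(text[back[i]:i])
        i = back[i]
    return tokens[::-1]

# Hypothetical vocabulary; single characters are included so coverage is always possible.
vocab = {"path", "piece", "token", "iza", "tion", *"pathpiecetokenization"}
print(min_token_segmentation("pathpiecetokenization", vocab))  # ['path', 'piece', 'token', 'iza', 'tion']
```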

Insights and Findings

Contrary to popular belief, the paper finds no direct correlation between reduced CTC and improved downstream performance. This finding challenges the core assumption behind the effectiveness of methods like BPE, suggesting that factors beyond compression alone play a substantial role in determining how well a tokenization strategy works. Specifically, the paper highlights:

  • The significance of pre-tokenization, with findings suggesting that how spaces and digits are handled can influence model performance more than the overall token count (see the pre-tokenization sketch after this list).
  • Variability in the efficacy of vocabulary construction methods, where a top-down approach using BPE for initializing vocabulary performed best among the tested strategies.
  • A nuanced understanding of segmentation, showing that the choice of segmentation method (including length vs. random tie-breaking strategies) does not significantly impact model performance when controlling for other variables.
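
As referenced in the first bullet, pre-tokenization rules are commonly expressed as regular expressions that decide, for example, whether a leading space attaches to the following word and whether runs of digits are split off. The pattern below is a simplified, hypothetical illustration of that kind of rule, not the exact pre-tokenizers compared in the paper.

```python
import re

# Simplified pre-tokenizer: a leading space attaches to the following word,
# digit runs are separated from letters, and punctuation stands alone.
PRETOKEN_RE = re.compile(r" ?[A-Za-z]+| ?\d+| ?[^\sA-Za-z\d]+|\s+")

def pretokenize(text: str):
    return PRETOKEN_RE.findall(text)

print(pretokenize("Model v2 has 175 billion parameters!"))
# ['Model', ' v', '2', ' has', ' 175', ' billion', ' parameters', '!']
```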

Implications and Future Directions

This research contributes to a re-evaluation of tokenization as understood in NLP. By breaking down tokenization into its component stages and rigorously testing each, the paper invites a more nuanced appreciation of what constitutes effective tokenization. The findings open several avenues for future research, particularly in exploring the morphology and semantics inherent in the tokenization process and its impact on model understanding and performance. Additionally, the introduction of PathPiece and the comprehensive dataset of trained models provide valuable resources for continued exploration in this domain.

Conclusion

The investigation into the effects of tokenization on downstream performance using PathPiece reveals a complex landscape where reducing the number of tokens does not necessarily equate to better model performance. This challenges previous assumptions about tokenization efficacy, emphasizing the importance of a multifaceted approach to tokenizer design. Through a meticulous examination of tokenization stages and their impact on LLMs, this paper contributes to a deeper understanding of the fundamental processes underpinning NLP, paving the way for more informed and effective tokenizer developments in the future.

Authors (7)
  1. Craig W. Schmidt
  2. Varshini Reddy
  3. Haoran Zhang
  4. Alec Alameddine
  5. Omri Uzan
  6. Yuval Pinter
  7. Chris Tanner