Evaluating Tokenizer Performance of Large Language Models Across Official Indian Languages (2411.12240v2)

Published 19 Nov 2024 in cs.CL and cs.AI

Abstract: LLMs based on transformer architectures have revolutionized a variety of domains, with tokenization playing a pivotal role in their pre-processing and fine-tuning stages. In multilingual models, particularly those tailored for Indic languages, effective tokenization is crucial for optimizing performance. This paper presents a comprehensive evaluation of tokenizers used by 12 LLMs across all 22 official languages of India, with a focus on comparing the efficiency of their tokenization processes. We employed the Normalized Sequence Length (NSL) as a key metric in our analysis. Our findings reveal that the SUTRA tokenizer outperforms all other models, including several Indic-specific models, excelling in 14 languages. Notable insights include the SUTRA tokenizer's superior handling of Indic languages, GPT-4o's advancement over its predecessor GPT-4 in processing Indian languages, and the limited performance of Project Indus in certain languages. This study underscores the critical importance of developing targeted tokenization strategies for multilingual and Indic-centric models, laying the groundwork for future improvements in tokenizer design to enhance linguistic coverage and model efficiency.

Authors (2)
  1. S. Tamang
  2. D. J. Bora

Summary

Evaluating Tokenizer Performance of LLMs Across Official Indian Languages

The paper presents an in-depth evaluation of the tokenizers used by LLMs for the 22 officially recognized languages of India. Tokenization is a critical component of LLMs, strongly influencing their preprocessing and fine-tuning stages. The complexity and diversity inherent to Indic languages make this evaluation particularly pertinent, as these languages pose a unique set of challenges within NLP.

The paper systematically compares 12 distinct LLMs, both proprietary and open-weight, including models such as GPT-4o, GPT-4, Meta's Llama, and TWO AI's SUTRA. The evaluation employs Normalized Sequence Length (NSL) as the primary metric, offering a quantitative basis for gauging tokenization efficiency across varied linguistic terrain. This form of benchmarking is essential for refining model design and enhancing the multilingual adeptness of LLMs.
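As a rough illustration, the sketch below shows an NSL-style measurement, assuming NSL is computed as the ratio of a candidate tokenizer's token count to a baseline tokenizer's count, averaged over a sample corpus (lower is better). The tiktoken encodings used here (cl100k_base for GPT-4, o200k_base for GPT-4o) stand in for the paper's full tokenizer set, and the sample sentences are illustrative rather than the paper's evaluation data.

```python
# Minimal NSL-style comparison sketch using tiktoken.
# Assumption: NSL = mean over the corpus of
#   (candidate token count) / (baseline token count), lower is better.
import tiktoken

baseline = tiktoken.get_encoding("cl100k_base")   # GPT-4 tokenizer
candidate = tiktoken.get_encoding("o200k_base")   # GPT-4o tokenizer

corpus = [
    "भारत एक विविधतापूर्ण देश है।",              # Hindi (illustrative sample)
    "ভারত একটি বৈচিত্র্যময় দেশ।",               # Bengali (illustrative sample)
    "இந்தியா ஒரு பன்முகத்தன்மை கொண்ட நாடு.",     # Tamil (illustrative sample)
]

def nsl(candidate_enc, baseline_enc, texts):
    """Average ratio of candidate to baseline token counts."""
    ratios = [
        len(candidate_enc.encode(t)) / len(baseline_enc.encode(t))
        for t in texts
    ]
    return sum(ratios) / len(ratios)

print(f"NSL (o200k_base vs cl100k_base): {nsl(candidate, baseline, corpus):.3f}")
```

A value below 1.0 would indicate that the candidate tokenizer produces shorter token sequences than the baseline on this sample, which is the direction of improvement the paper reports for GPT-4o over GPT-4 on Indian languages.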

The results indicate that the SUTRA tokenizer exhibits superior performance, excelling in 14 languages and consistently outperforming both Indic-specific and general-purpose LLMs. This emphasizes SUTRA's capacity to handle the intricate morphology and syntax of Indian languages more effectively than its counterparts. GPT-4o was also observed to improve on GPT-4, reflecting incremental gains in handling linguistic diversity. Despite its model-specific optimizations, Project Indus performed suboptimally across several languages, likely because of limitations in handling scripts other than Devanagari.

The findings underscore the importance of tailored tokenization strategies, particularly in multilingual and Indic-centric models, and highlight SUTRA's potential as a reference point for further development in the field. Comprehensive language coverage not only supports the development of more robust NLP tools but also accelerates the adoption of AI in multilingual contexts. Efficient tokenization reduces the number of tokens generated per input and thus the computational cost, which is crucial for scaling AI applications in linguistically diverse environments such as India.
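To make the efficiency point concrete, the sketch below counts tokens for a short Hindi sentence with two publicly available multilingual tokenizers (bert-base-multilingual-cased and xlm-roberta-base, chosen purely for illustration; they are not among the paper's 12 evaluated models). Since decoder compute and API pricing scale roughly with token count, a tokenizer that produces fewer tokens for the same text is directly cheaper to run.

```python
# Illustrative token-count comparison with Hugging Face tokenizers.
# The checkpoints are real, but the sentence and the comparison are only a
# sketch of the kind of per-language efficiency analysis the paper performs.
from transformers import AutoTokenizer

text = "भारत में बाईस आधिकारिक भाषाएँ हैं।"  # "India has twenty-two official languages."

for name in ["bert-base-multilingual-cased", "xlm-roberta-base"]:
    tok = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tok.encode(text, add_special_tokens=False))
    print(f"{name}: {n_tokens} tokens")
```

Whichever tokenizer yields the lower count for a given language fits more text into a fixed context window and generates output at lower cost, which is why per-language tokenizer efficiency matters for deployment at scale.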

Moreover, the paper draws attention to potential areas for further research, such as improving tokenization for low-resource languages and enhancing script-independent token handling. Future directions may include hybrid approaches integrating neural network-based token segmentation techniques to further optimize tokenization performance across diverse linguistic families.

In conclusion, the paper provides a significant contribution to the understanding of tokenization within LLMs for Indic languages, revealing insights that could drive future innovations in AI language processing systems. The paper lays a foundation that encourages the development of models that are not only computationally efficient but also linguistically inclusive, advancing the horizon of multilingual NLP technologies.