Pretraining Data and Tokenizer for Indic LLM (2407.12481v1)
Abstract: We present a novel approach to data preparation for developing a multilingual Indic LLM. Our meticulous data acquisition spans open-source and proprietary sources, including Common Crawl, Indic books, news articles, and Wikipedia, ensuring a diverse and rich linguistic representation. For each Indic language, we design a custom preprocessing pipeline to effectively eliminate redundant and low-quality text. Additionally, we perform deduplication on Common Crawl data to address the redundancy present in 70% of the crawled web pages. This study focuses on developing high-quality data and optimizing tokenization of our multilingual dataset for Indic LLMs with 3B and 7B parameters, engineered for superior performance in Indic languages. We introduce a novel multilingual tokenizer training strategy, demonstrating that our custom-trained Indic tokenizer outperforms the state-of-the-art OpenAI Tiktoken tokenizer, achieving a superior token-to-word ratio for Indic languages.
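As an illustration of the token-to-word comparison mentioned in the abstract, the sketch below computes the ratio for a Tiktoken encoding and a SentencePiece model on a few Hindi sentences. This is not the authors' code: the `cl100k_base` encoding, the `indic_tokenizer.model` path, and the sample sentences are assumptions introduced here for illustration. A lower ratio (fewer tokens per word) generally indicates more efficient tokenization of Indic text.

```python
# Minimal sketch of a token-to-word ratio comparison (assumptions noted above).
import tiktoken
import sentencepiece as spm

def token_to_word_ratio(encode_fn, texts):
    """Average number of tokens emitted per whitespace-separated word."""
    total_tokens = sum(len(encode_fn(t)) for t in texts)
    total_words = sum(len(t.split()) for t in texts)
    return total_tokens / max(total_words, 1)

# Illustrative Hindi sentences (not from the paper's evaluation set).
samples = [
    "भारत एक विविधताओं से भरा देश है।",
    "मुझे हिंदी में पढ़ना पसंद है।",
]

# Baseline: an OpenAI Tiktoken encoding (cl100k_base is an assumption;
# the paper does not specify which encoding was used for comparison).
tiktoken_enc = tiktoken.get_encoding("cl100k_base")
print("Tiktoken ratio:", token_to_word_ratio(tiktoken_enc.encode, samples))

# Custom Indic tokenizer: hypothetical SentencePiece model path.
indic_sp = spm.SentencePieceProcessor(model_file="indic_tokenizer.model")
print("Indic tokenizer ratio:", token_to_word_ratio(indic_sp.encode, samples))
```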
- Rahul Kumar
- Shubham Kakde
- Divyansh Rajput
- Daud Ibrahim
- Rishabh Nahata
- Pidathala Sowjanya
- Deepak Kumar