
Samanantar: The Largest Publicly Available Parallel Corpora Collection for 11 Indic Languages (2104.05596v4)

Published 12 Apr 2021 in cs.CL

Abstract: We present Samanantar, the largest publicly available parallel corpora collection for Indic languages. The collection contains a total of 49.7 million sentence pairs between English and 11 Indic languages (from two language families). Specifically, we compile 12.4 million sentence pairs from existing, publicly-available parallel corpora, and additionally mine 37.4 million sentence pairs from the web, resulting in a 4x increase. We mine the parallel sentences from the web by combining many corpora, tools, and methods: (a) web-crawled monolingual corpora, (b) document OCR for extracting sentences from scanned documents, (c) multilingual representation models for aligning sentences, and (d) approximate nearest neighbor search for searching in a large collection of sentences. Human evaluation of samples from the newly mined corpora validate the high quality of the parallel sentences across 11 languages. Further, we extract 83.4 million sentence pairs between all 55 Indic language pairs from the English-centric parallel corpus using English as the pivot language. We trained multilingual NMT models spanning all these languages on Samanantar, which outperform existing models and baselines on publicly available benchmarks, such as FLORES, establishing the utility of Samanantar. Our data and models are available publicly at https://ai4bharat.iitm.ac.in/samanantar and we hope they will help advance research in NMT and multilingual NLP for Indic languages.

Overview of "Samanantar: The Largest Publicly Available Parallel Corpora Collection for 11 Indic Languages"

The paper introduces "Samanantar," which is positioned as the most extensive publicly available parallel corpus for Indic languages. It encompasses 49.7 million sentence pairs between English and 11 Indic languages, achieved by combining several previously available resources and novel mining efforts. The paper delineates a significant methodological advancement in the compilation and augmentation of parallel corpora, critical for developing machine translation (MT) models in low-resource language settings.

Core Contributions

  • Corpus Compilation: The authors highlight the creation of 49.7 million parallel sentence pairs featuring 11 Indic languages, including Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, and Telugu. This corpus incorporates 12.4 million sentences from existing sources and introduces 37.4 million newly mined pairs from web sources, effectively quadrupling the available data.
  • Methodological Innovations: Mining parallel sentences from diverse sources is achieved through a synergy of tools and techniques, such as web-crawled monolingual corpora, OCR for scanned documents, multilingual representation models for sentence alignment, and approximate nearest-neighbor search for large datasets. Such methodologies ensure high quality and scalability in extracting parallel sentences.
  • Multilingual NMT Models: Leveraging the Samanantar corpus, new multilingual MT models, specifically IndicTrans, outperform existing open-source models and baselines on publicly available benchmarks such as FLORES.
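The mining step above can be sketched as follows: sentences on each side are embedded with a multilingual representation model such as LaBSE, and candidate pairs are scored with a margin-based nearest-neighbour criterion. The sketch below is illustrative, not the paper's exact pipeline: random unit vectors stand in for real LaBSE embeddings, and brute-force matrix search replaces an approximate nearest-neighbour index such as FAISS.

```python
import numpy as np

def margin_scores(src_emb, tgt_emb, k=4):
    """Margin-based alignment scores between L2-normalised sentence
    embeddings: cosine similarity divided by the mean similarity of
    each side's k nearest neighbours (Artetxe & Schwenk style)."""
    sim = src_emb @ tgt_emb.T                           # cosine similarity matrix
    nn_src = np.sort(sim, axis=1)[:, -k:].mean(axis=1)  # per-source NN average
    nn_tgt = np.sort(sim, axis=0)[-k:, :].mean(axis=0)  # per-target NN average
    return sim / ((nn_src[:, None] + nn_tgt[None, :]) / 2)

def mine_pairs(src_emb, tgt_emb, k=4, threshold=1.0):
    """Return (src_index, tgt_index, score) for each source sentence's
    best-scoring target, keeping only pairs above the threshold."""
    margin = margin_scores(src_emb, tgt_emb, k)
    best = margin.argmax(axis=1)
    scores = margin[np.arange(len(src_emb)), best]
    return [(i, int(j), float(s))
            for i, (j, s) in enumerate(zip(best, scores)) if s >= threshold]

# Toy demo with random stand-in embeddings (not real LaBSE vectors).
rng = np.random.default_rng(0)
tgt = rng.normal(size=(20, 32))
tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)
src = tgt[:5].copy()   # first five "source" sentences have exact matches
pairs = mine_pairs(src, tgt)
```

The margin criterion rewards pairs whose similarity stands out from each sentence's local neighbourhood, which filters out "hub" sentences that are weakly similar to everything; at web scale the brute-force search would be replaced by an approximate index.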

Quantitative Results

The paper reports several strong results. The newly mined data quadruples the parallel sentence pairs available between English and the 11 Indic languages, and human annotation of sampled pairs validates their quality, showing high semantic textual similarity. Pivoting through English further yields 83.4 million sentence pairs across all 55 Indic language pairs. IndicTrans models trained on Samanantar achieve higher BLEU scores than existing open-source models and even outperform commercial MT solutions on several benchmarks.
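The Indic–Indic pairs reported in the abstract are obtained by pivoting through English: whenever the same English sentence occurs in two English-centric corpora, the two Indic sides are joined. A minimal sketch, assuming each corpus is keyed by its English side (the dictionaries and toy sentences below are purely illustrative):

```python
def pivot_pairs(en_to_src, en_to_tgt):
    """Extract src-tgt sentence pairs by pivoting on shared English
    sentences. Each argument maps an English sentence to its translation."""
    shared = en_to_src.keys() & en_to_tgt.keys()
    return sorted((en_to_src[en], en_to_tgt[en]) for en in shared)

# Hypothetical toy corpora: English-Hindi and English-Tamil.
en_hi = {"Hello": "नमस्ते", "Thank you": "धन्यवाद"}
en_ta = {"Hello": "வணக்கம்", "Good night": "இனிய இரவு"}
hi_ta = pivot_pairs(en_hi, en_ta)
```

In practice the pivot step must also handle an English sentence having multiple translations on either side, which this one-to-one sketch ignores.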

Implications

The research carries significant implications for both practical applications and theoretical advancements in MT and NLP:

  • Practical: By providing a comprehensive resource for Indic language translation, this corpus aids in building more effective and efficient MT systems, crucial for digital inclusivity in linguistically diverse regions like the Indian subcontinent.
  • Theoretical: The work underscores the potential improvements that can be obtained in low-resource language settings through corpus augmentation and sophisticated data mining techniques, contributing to the broader understanding of knowledge transfer in multilingual models.

Future Directions

Looking forward, the authors suggest further refining LaBSE-based alignments and extending pretraining on Indic-specific monolingual corpora. They also propose developing an mT5-style model designed specifically for Indic languages and scripts, to fully exploit the enlarged parallel corpora and further improve MT across domains and language pairs.

The Samanantar corpus, alongside the IndicTrans model, is an instrumental step towards enhancing language technologies for Indic languages. The work establishes a rigorous benchmark for future research endeavors in the domain of Indic languages and multilingual NLP.

Authors (18)
  1. Gowtham Ramesh
  2. Sumanth Doddapaneni
  3. Aravinth Bheemaraj
  4. Mayank Jobanputra
  5. Raghavan AK
  6. Ajitesh Sharma
  7. Sujit Sahoo
  8. Harshita Diddee
  9. Mahalakshmi J
  10. Divyanshu Kakwani
  11. Navneet Kumar
  12. Aswin Pradeep
  13. Srihari Nagaraj
  14. Kumar Deepak
  15. Vivek Raghavan
  16. Anoop Kunchukuttan
  17. Pratyush Kumar
  18. Mitesh Shantadevi Khapra
Citations (218)