IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages (2305.16307v3)

Published 25 May 2023 in cs.CL

Abstract: India has a rich linguistic landscape with languages from 4 major language families spoken by over a billion people. The 22 languages listed in the Constitution of India (referred to as scheduled languages) are the focus of this work. Given the linguistic diversity, high-quality and accessible Machine Translation (MT) systems are essential in a country like India. Prior to this work, there was (i) no parallel training data spanning all 22 languages, (ii) no robust benchmarks covering all these languages and containing content relevant to India, and (iii) no existing translation models which support all the 22 scheduled languages of India. In this work, we aim to address this gap by focusing on the missing pieces required for enabling wide, easy, and open access to good machine translation systems for all 22 scheduled Indian languages. We identify four key areas of improvement: curating and creating larger training datasets, creating diverse and high-quality benchmarks, training multilingual models, and releasing models with open access. Our first contribution is the release of the Bharat Parallel Corpus Collection (BPCC), the largest publicly available parallel corpora for Indic languages. BPCC contains a total of 230M bitext pairs, of which 126M were newly added, including 644K manually translated sentence pairs created as part of this work. Our second contribution is the release of the first n-way parallel benchmark covering all 22 Indian languages, featuring diverse domains, Indian-origin content, and source-original test sets. Next, we present IndicTrans2, the first model to support all 22 languages, surpassing existing models on multiple existing and new benchmarks created as a part of this work. Lastly, to promote accessibility and collaboration, we release our models and associated data with permissive licenses at https://github.com/AI4Bharat/IndicTrans2.

High-Quality and Accessible Machine Translation for Scheduled Indian Languages

The paper entitled "IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages" addresses the complex challenge of developing machine translation (MT) systems for the diverse linguistic landscape of India. It focuses on the 22 scheduled Indian languages, aiming to fill significant gaps in data availability, benchmarking, and model support. This work makes notable contributions in four key areas: data creation, benchmarking, model training, and open-access distribution.

Contributions and Highlights

  1. Data Creation:
    • The release of the Bharat Parallel Corpus Collection (BPCC), comprising 230 million bitext pairs, constitutes the largest publicly available parallel corpus collection for Indic languages. It includes 126 million newly added pairs, drawn largely from mined sources and including 644,000 manually translated sentence pairs created as part of this work. The corpus addresses prior deficiencies in data availability for several low-resource languages.
  2. Benchmarking:
    • The creation of the IN22 benchmark, an n-way parallel evaluation set covering all 22 scheduled languages, provides test sets spanning varied domains, including conversational data. This addresses the prior lack of robust, culturally relevant benchmarks for Indian languages.
  3. Model Development:
    • IndicTrans2, the first translation model to support all 22 scheduled languages, outperforms existing models in both the En-Indic and Indic-En directions on multiple metrics, including chrF++ and COMET (a hedged chrF++ scoring sketch follows this list). The models employ techniques such as back-translation and knowledge distillation to improve translation quality and model efficiency.
  4. Open Access:
    • The release of the models and datasets under permissive licenses maximizes accessibility, enabling further research and commercial use. The paper also highlights IndicTrans2-Dist, a smaller, computationally efficient variant that preserves performance (see the inference sketch after this list).
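
To make the reported metric concrete, here is a minimal scoring sketch using the sacreBLEU library, whose CHRF metric with word_order=2 corresponds to chrF++. The example sentences are illustrative placeholders, not data from the paper.

```python
# Minimal chrF++ scoring sketch using sacreBLEU (pip install sacrebleu).
from sacrebleu.metrics import CHRF

# word_order=2 turns chrF into chrF++ (character n-grams plus
# word unigrams and bigrams), the variant reported in the paper.
chrf_pp = CHRF(word_order=2)

hypotheses = ["यह एक परीक्षण वाक्य है।"]      # system outputs (placeholder)
references = [["यह एक परीक्षण वाक्य है।"]]    # one list per reference stream

score = chrf_pp.corpus_score(hypotheses, references)
print(f"chrF++ = {score.score:.2f}")
```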

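The released checkpoints can also be run locally. The sketch below loads the distilled En-Indic model through Hugging Face Transformers; the checkpoint identifier, the trust_remote_code requirement, and the language-tag input convention are assumptions based on the AI4Bharat release, so consult the linked repository (whose IndicProcessor handles script normalization and tagging) for the authoritative pipeline.

```python
# Hedged inference sketch for the distilled En-Indic model.
# The checkpoint id and preprocessing convention are assumptions taken
# from the AI4Bharat release; see https://github.com/AI4Bharat/IndicTrans2
# for the authoritative usage.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ckpt = "ai4bharat/indictrans2-en-indic-dist-200M"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt, trust_remote_code=True)

# The models expect FLORES-style source/target language tags
# (e.g. eng_Latn, hin_Deva) prepended to the input sentence.
text = "eng_Latn hin_Deva India has a rich linguistic landscape."

inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(**inputs, num_beams=5, max_length=256)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```
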
Implications and Future Directions

The impact of this research is multifaceted, extending from academic inquiry to practical utility in sectors such as governance, education, and national integration. The work improves language inclusivity in digital technologies, catalyzing socio-economic growth and broadening access to information.

Future research may focus on:

  • Enhancing model performance for extremely low-resource languages through innovative data curation techniques.
  • Developing more sophisticated, context-aware evaluation metrics tailored for linguistically rich environments.
  • Expanding model capabilities to support additional scripts and unscheduled languages, accommodating the evolving linguistic dynamics of India.

By establishing comprehensive benchmarks and publicly accessible resources, IndicTrans2 sets a precedent for the development of inclusive MT systems globally. The paper's methodologies may serve as a blueprint for similar initiatives in other linguistically diverse regions, underscoring its broader significance in the field of machine translation.

Authors (14)
  1. Jay Gala (13 papers)
  2. Pranjal A. Chitale (3 papers)
  3. Raghavan AK (2 papers)
  4. Varun Gumma (14 papers)
  5. Sumanth Doddapaneni (16 papers)
  6. Aswanth Kumar (3 papers)
  7. Janki Nawale (3 papers)
  8. Anupama Sujatha (1 paper)
  9. Ratish Puduppully (20 papers)
  10. Vivek Raghavan (14 papers)
  11. Pratyush Kumar (44 papers)
  12. Mitesh M. Khapra (79 papers)
  13. Raj Dabre (65 papers)
  14. Anoop Kunchukuttan (45 papers)
Citations (96)