Distributionally Robust Multilingual Machine Translation (2109.04020v1)

Published 9 Sep 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Multilingual neural machine translation (MNMT) learns to translate multiple language pairs with a single model, potentially improving both the accuracy and the memory-efficiency of deployed models. However, the heavy data imbalance between languages hinders the model from performing uniformly across language pairs. In this paper, we propose a new learning objective for MNMT based on distributionally robust optimization, which minimizes the worst-case expected loss over the set of language pairs. We further show how to practically optimize this objective for large translation corpora using an iterated best response scheme, which is effective and incurs negligible additional computational cost compared to standard empirical risk minimization. We perform extensive experiments on three sets of languages from two datasets and show that our method consistently outperforms strong baseline methods in terms of average and per-language performance under both many-to-one and one-to-many translation settings.
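
The objective described above pairs a min-max formulation with an iterated best response scheme: an adversary re-weights language pairs toward those with the highest expected loss, and the model is then trained on the re-weighted loss. The sketch below is illustrative only; it assumes a KL-regularized adversary whose best response is a softmax over scaled per-pair losses, and uses synthetic loss values and hypothetical function names, not the paper's exact uncertainty set or implementation.

```python
import numpy as np

# Illustrative per-language-pair losses from one evaluation pass
# (synthetic numbers, not results from the paper).
pair_losses = {"en-fr": 1.2, "en-de": 1.5, "en-gu": 3.1, "en-kk": 2.8}

def best_response_weights(losses, temperature=1.0):
    """Approximate adversarial best response: put more weight on the
    worst-performing language pairs. Assumes a KL-regularized adversary,
    so the best response is a softmax over scaled losses; the paper's
    exact constraint set may differ."""
    values = np.array(list(losses.values()))
    weights = np.exp(values / temperature)
    return dict(zip(losses.keys(), weights / weights.sum()))

def weighted_batch_loss(example_losses, example_pairs, pair_weights):
    """Model step: scale each example's loss by the current weight of its
    language pair, so gradient updates focus on high-loss pairs."""
    scale = np.array([pair_weights[p] for p in example_pairs])
    return float(np.average(example_losses, weights=scale))

# Iterated best response alternates two steps: (1) refresh the pair weights
# from current per-pair loss estimates, (2) update the model on the
# re-weighted loss (a real system would backpropagate here).
pair_weights = best_response_weights(pair_losses, temperature=0.5)
batch_loss = weighted_batch_loss([1.1, 3.4, 2.9],
                                 ["en-fr", "en-gu", "en-kk"],
                                 pair_weights)
print(pair_weights, batch_loss)
```

Because the weight update reuses loss values already computed during training, this alternation adds little overhead relative to standard empirical risk minimization, which is the practical appeal the abstract highlights.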

Authors (5)
  1. Chunting Zhou (36 papers)
  2. Daniel Levy (29 papers)
  3. Xian Li (115 papers)
  4. Marjan Ghazvininejad (33 papers)
  5. Graham Neubig (342 papers)
Citations (24)