Revisiting Machine Translation for Cross-lingual Classification (2305.14240v1)

Published 23 May 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Machine Translation (MT) has been widely used for cross-lingual classification, either by translating the test set into English and running inference with a monolingual model (translate-test), or by translating the training set into the target languages and finetuning a multilingual model (translate-train). However, most research in the area focuses on the multilingual models rather than the MT component. We show that, by using a stronger MT system and mitigating the mismatch between training on original text and running inference on machine-translated text, translate-test can do substantially better than previously assumed. The optimal approach, however, is highly task dependent, as we identify various sources of cross-lingual transfer gap that affect different tasks and approaches differently. Our work calls into question the dominance of multilingual models for cross-lingual classification, and prompts paying more attention to MT-based baselines.
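The two strategies the abstract contrasts can be sketched as follows. This is a minimal conceptual illustration, not code from the paper: `translate`, the model callables, and the `finetune` method are hypothetical stand-ins for an MT system and classifier training loop.

```python
def translate(texts, src, tgt):
    # Hypothetical MT system; the paper argues its quality matters greatly.
    # Here it just tags each string so the data flow is visible.
    return [f"[{src}->{tgt}] {t}" for t in texts]

def translate_test(monolingual_model, test_texts, src_lang):
    # translate-test: translate the test set into English, then run
    # inference with an English-only (monolingual) classifier.
    english = translate(test_texts, src=src_lang, tgt="en")
    return [monolingual_model(t) for t in english]

def translate_train(multilingual_model, train_texts, labels, tgt_langs):
    # translate-train: translate the training set into each target
    # language and finetune a multilingual model on the translated data.
    augmented = []
    for lang in tgt_langs:
        augmented += list(zip(translate(train_texts, "en", lang), labels))
    multilingual_model.finetune(augmented)
    return multilingual_model
```

The "mismatch" the abstract mentions arises in translate-test because the monolingual model is trained on original English text but sees machine-translated text at inference time.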

Authors (5)
  1. Mikel Artetxe (52 papers)
  2. Vedanuj Goswami (19 papers)
  3. Shruti Bhosale (18 papers)
  4. Angela Fan (49 papers)
  5. Luke Zettlemoyer (225 papers)
Citations (29)