Multilingual Transfer Learning for QA Using Translation as Data Augmentation (2012.05958v1)

Published 10 Dec 2020 in cs.CL

Abstract: Prior work on multilingual question answering has mostly focused on using large multilingual pre-trained language models (LMs) to perform zero-shot language-wise learning: train a QA model on English and test on other languages. In this work, we explore strategies that improve cross-lingual transfer by bringing the multilingual embeddings closer in the semantic space. Our first strategy augments the original English training data with machine translation-generated data, resulting in a corpus of multilingual silver-labeled QA pairs that is 14 times larger than the original training set. In addition, we propose two novel strategies, language adversarial training and a language arbitration framework, which significantly improve the (zero-resource) cross-lingual transfer performance and yield LM embeddings that are less language-variant. Empirically, we show that the proposed models outperform the previous zero-shot baseline on the recently introduced multilingual MLQA and TyDiQA datasets.
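The abstract does not spell out how the silver labels are produced; a common recipe for translation-based QA augmentation is to machine-translate each English question and context into a target language and project the answer span through the translation. The sketch below illustrates one such heuristic using marker tokens around the answer. Here `translate_fn` stands in for any MT system, and the marker trick is an assumption for illustration, not necessarily the paper's alignment method.

```python
from typing import Callable, Optional

def augment_example(question: str, context: str,
                    answer_start: int, answer_text: str,
                    translate_fn: Callable[[str], str]) -> Optional[dict]:
    """Translate one English QA pair, projecting the answer span by
    wrapping it in marker tokens that (usually) survive translation."""
    end = answer_start + len(answer_text)
    marked = (context[:answer_start] + " [A] " + answer_text
              + " [/A] " + context[end:])

    q_t = translate_fn(question)
    c_t = translate_fn(marked)

    # Recover the projected span; discard the example if a marker was
    # dropped or reordered by the MT system.
    lo, hi = c_t.find("[A]"), c_t.find("[/A]")
    if lo == -1 or hi == -1 or hi <= lo:
        return None
    ans_t = c_t[lo + len("[A]"):hi].strip()
    clean = " ".join(c_t.replace("[A]", "").replace("[/A]", "").split())
    return {"question": q_t, "context": clean,
            "answer_text": ans_t, "answer_start": clean.find(ans_t)}

# Dry run with identity "translation":
ex = augment_example("Who wrote it?", "It was written by Ada.",
                     18, "Ada", lambda s: s)
print(ex)  # answer_text='Ada', answer_start=18
```

Running this over the English training set once per target language is what yields a multilingual corpus many times the original size; examples whose markers do not survive translation are simply dropped.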
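Similarly, the language adversarial training objective is only named in the abstract. A standard way to make encoder embeddings less language-variant is to train a language discriminator through a gradient reversal layer, as in domain-adversarial training; the PyTorch sketch below is one such instance, with the hidden sizes and reversal weight `lambd` chosen arbitrarily for illustration rather than taken from the paper.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient
    in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class LanguageDiscriminator(nn.Module):
    """Predicts the input language from a pooled encoder representation.
    Because its gradient is reversed before reaching the encoder, the
    encoder is pushed toward language-invariant embeddings."""
    def __init__(self, hidden_size: int, n_languages: int, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, n_languages),
        )

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        return self.net(GradReverse.apply(pooled, self.lambd))

# Toy usage: pooled [CLS]-style embeddings for a batch of 4 examples.
disc = LanguageDiscriminator(hidden_size=768, n_languages=7)
logits = disc(torch.randn(4, 768))
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 2, 5, 1]))
loss.backward()
```

In a full model, this cross-entropy term would be added to the QA span loss, with `lambd` (often annealed over training) controlling how strongly the encoder is penalized for retaining language-identifying signal while the discriminator tries to recover it.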

Authors (5)
  1. Mihaela Bornea (10 papers)
  2. Lin Pan (23 papers)
  3. Sara Rosenthal (21 papers)
  4. Radu Florian (54 papers)
  5. Avirup Sil (45 papers)
Citations (37)