
QAmeleon: Multilingual QA with Only 5 Examples (2211.08264v2)

Published 15 Nov 2022 in cs.CL

Abstract: The availability of large, high-quality datasets has been one of the main drivers of recent progress in question answering (QA). Such annotated datasets however are difficult and costly to collect, and rarely exist in languages other than English, rendering QA technology inaccessible to underrepresented languages. An alternative to building large monolingual training datasets is to leverage pre-trained language models (PLMs) under a few-shot learning setting. Our approach, QAmeleon, uses a PLM to automatically generate multilingual data upon which QA models are trained, thus avoiding costly annotation. Prompt tuning the PLM for data synthesis with only five examples per language delivers accuracy superior to translation-based baselines, bridges nearly 60% of the gap between an English-only baseline and a fully supervised upper bound trained on almost 50,000 hand labeled examples, and always leads to substantial improvements compared to fine-tuning a QA model directly on labeled examples in low resource settings. Experiments on the TyDiQA-GoldP and MLQA benchmarks show that few-shot prompt tuning for data synthesis scales across languages and is a viable alternative to large-scale annotation.
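To make the pipeline the abstract describes more concrete, here is a minimal, purely illustrative sketch of the three stages: prompt-tune a PLM on a handful of gold examples per language, sample synthetic (passage, question, answer) triples, and fine-tune a QA model on the generated data. All function and variable names (`prompt_tune`, `train_qa_model`, the dummy PLM) are assumptions for illustration and are not from the paper or any specific library.

```python
# Illustrative sketch of a QAmeleon-style pipeline (placeholder stubs, not the authors' code):
# (1) prompt-tune a PLM on ~5 gold (context, question, answer) exemplars per language,
# (2) sample synthetic QA triples from the tuned PLM over unlabeled passages,
# (3) fine-tune a standard QA model on the synthetic data instead of large-scale annotation.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class QAExample:
    context: str
    question: str
    answer: str


def prompt_tune(plm: Callable[[str], str],
                exemplars: List[QAExample]) -> Callable[[str], QAExample]:
    """Placeholder for prompt tuning: in the paper only prompt parameters are
    updated, using roughly five exemplars per language. Here we simply build a
    few-shot prefix so the sketch runs end to end."""
    prefix = "\n\n".join(
        f"Passage: {ex.context}\nQuestion: {ex.question}\nAnswer: {ex.answer}"
        for ex in exemplars
    )

    def generate(passage: str) -> QAExample:
        # A real implementation would decode a question/answer pair from the
        # tuned PLM; this stub just parses whatever the dummy PLM returns.
        output = plm(f"{prefix}\n\nPassage: {passage}\nQuestion:")
        question, _, answer = output.partition("Answer:")
        return QAExample(passage, question.strip(), answer.strip())

    return generate


def synthesize(generator: Callable[[str], QAExample],
               passages: List[str]) -> List[QAExample]:
    """Generate one synthetic QA example per unlabeled passage."""
    return [generator(p) for p in passages]


def train_qa_model(data: List[QAExample]) -> None:
    """Placeholder: fine-tune an extractive QA model on the synthetic data."""
    print(f"Training QA model on {len(data)} synthetic examples")


if __name__ == "__main__":
    # Five gold exemplars for one toy target language (Swahili here, as an example).
    exemplars = [QAExample("Nairobi ni mji mkuu wa Kenya.",
                           "Mji mkuu wa Kenya ni upi?", "Nairobi")] * 5
    dummy_plm = lambda prompt: "Mji mkuu wa Tanzania ni upi? Answer: Dodoma"

    generator = prompt_tune(dummy_plm, exemplars)
    synthetic = synthesize(generator, ["Dodoma ni mji mkuu wa Tanzania."])
    train_qa_model(synthetic)
```

The point of the sketch is the division of labor: the few-shot prompt-tuned PLM does the data generation in each target language, and a conventional QA model is then trained on that synthetic data, which is what lets the approach scale across languages without large annotated datasets.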

Authors (9)
  1. Priyanka Agrawal (15 papers)
  2. Chris Alberti (23 papers)
  3. Fantine Huot (19 papers)
  4. Joshua Maynez (28 papers)
  5. Ji Ma (72 papers)
  6. Sebastian Ruder (93 papers)
  7. Kuzman Ganchev (13 papers)
  8. Dipanjan Das (42 papers)
  9. Mirella Lapata (135 papers)
Citations (22)
