A Three-Pronged Approach to Cross-Lingual Adaptation with Multilingual LLMs (2406.17377v1)

Published 25 Jun 2024 in cs.CL

Abstract: Low-resource languages, by their very definition, tend to be underrepresented in the pre-training corpora of LLMs. In this work, we investigate three low-resource cross-lingual approaches that enable an LLM to adapt to tasks in previously unseen languages. Llama-2 is an LLM in which Indic languages, among many other language families, contribute less than $0.005\%$ of the total $2$ trillion tokens in the pre-training corpus. We experiment with the English-dominated Llama-2 for cross-lingual transfer to three Indic target languages: Bengali, Hindi, and Tamil. We study three approaches for cross-lingual transfer, under both in-context learning (ICL) and fine-tuning. First, we find that adding additional supervisory signals via a dominant language of the LLM leads to improvements, both under in-context learning and fine-tuning. Second, adapting the target languages to word reordering may be beneficial under ICL, but its impact diminishes with fine-tuning. Finally, continued pre-training in one low-resource language can improve model performance for other related low-resource languages.
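
The abstract does not spell out the prompt format, but the first approach (supervisory signals via a dominant language) can be illustrated by pairing each target-language exemplar with its English translation in the in-context prompt. The sketch below is a minimal illustration under that assumption; the task (sentiment classification), prompt template, and example data are placeholders, not the paper's actual configuration.

```python
# Minimal sketch: in-context learning where each Hindi exemplar carries an
# English translation as the dominant-language "supervisory signal".
# Prompt template, examples, and task are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # English-dominated base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Few-shot exemplars: target-language input, its English translation, label.
exemplars = [
    {"hi": "यह फिल्म शानदार थी।", "en": "This movie was wonderful.", "label": "positive"},
    {"hi": "खाना बहुत खराब था।", "en": "The food was very bad.", "label": "negative"},
]

def build_prompt(exemplars, query_hi):
    # Each shot shows the Hindi input, its English gloss, and the label;
    # the query ends after "English:" so the model completes both fields.
    parts = [
        f"Hindi: {ex['hi']}\nEnglish: {ex['en']}\nSentiment: {ex['label']}\n"
        for ex in exemplars
    ]
    parts.append(f"Hindi: {query_hi}\nEnglish:")
    return "\n".join(parts)

prompt = build_prompt(exemplars, "अभिनय बहुत अच्छा था।")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```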

Authors (4)
  1. Vaibhav Singh (11 papers)
  2. Amrith Krishna (16 papers)
  3. Karthika NJ (2 papers)
  4. Ganesh Ramakrishnan (88 papers)