Bilingual Adaptation of Monolingual Foundation Models (2407.12869v1)

Published 13 Jul 2024 in cs.CL and cs.AI

Abstract: We present an efficient method for adapting a monolingual LLM to another language, addressing the challenges of catastrophic forgetting and tokenizer limitations. We focus this study on adapting Llama 2 to Arabic. Our two-stage approach begins with expanding the vocabulary and training only the embedding matrix, followed by full-model continual pretraining on a bilingual corpus. By continually pretraining on a mix of Arabic and English corpora, the model retains its proficiency in English while acquiring capabilities in Arabic. Our approach yields significant improvements in Arabic and slight enhancements in English, demonstrating cost-effective cross-lingual transfer. We also perform extensive ablations on embedding initialization techniques, data mix ratios, and learning rates, and we release a detailed training recipe.
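
The two-stage recipe described in the abstract maps naturally onto standard tooling. Below is a minimal sketch of stage one using the Hugging Face transformers API; the checkpoint name, the placeholder Arabic tokens, the mean-of-embeddings initialization, and the optimizer settings are illustrative assumptions, not the paper's released recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Llama 2 is the base model in the paper; this checkpoint name is assumed.
base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Stage 1: expand the vocabulary. The two tokens below are placeholders;
# the paper extends the tokenizer with a full set of Arabic tokens.
new_tokens = ["مثال", "كلمة"]
tokenizer.add_tokens(new_tokens)
old_vocab_size = model.get_input_embeddings().weight.shape[0]
model.resize_token_embeddings(len(tokenizer))

# Initialize the new embedding rows. Mean-of-existing-embeddings is one
# common choice; the paper ablates several initialization techniques.
with torch.no_grad():
    for emb in (model.get_input_embeddings().weight,
                model.get_output_embeddings().weight):
        emb[old_vocab_size:] = emb[:old_vocab_size].mean(dim=0)

# Train only the embedding matrices: freeze everything else.
for p in model.parameters():
    p.requires_grad = False
model.get_input_embeddings().weight.requires_grad = True
model.get_output_embeddings().weight.requires_grad = True

# Optimize only the unfrozen parameters (learning rate is illustrative).
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

Stage two would then unfreeze all parameters and continue pretraining on the bilingual Arabic/English mix, which is where the paper's ablations over data mix ratios and learning rates apply.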

Authors (19)
  1. Gurpreet Gosal (9 papers)
  2. Yishi Xu (11 papers)
  3. Gokul Ramakrishnan (3 papers)
  4. Rituraj Joshi (4 papers)
  5. Avraham Sheinin (3 papers)
  6. Zhiming Chen (4 papers)
  7. Biswajit Mishra (9 papers)
  8. Natalia Vassilieva (11 papers)
  9. Joel Hestness (23 papers)
  10. Neha Sengupta (8 papers)
  11. Sunil Kumar Sahu (12 papers)
  12. Bokang Jia (3 papers)
  13. Satheesh Katipomu (3 papers)
  14. Onkar Pandit (4 papers)
  15. Samta Kamboj (4 papers)
  16. Rahul Pal (4 papers)
  17. Parvez Mullah (3 papers)
  18. Soundar Doraiswamy (2 papers)
  19. Mohamed El Karim Chami (1 paper)
Citations (1)