Parameter and Data Efficient Continual Pre-training for Robustness to Dialectal Variance in Arabic (2211.03966v1)

Published 8 Nov 2022 in cs.CL and cs.LG

Abstract: The use of multilingual language models for tasks in low- and high-resource languages has been a success story in deep learning. In recent times, Arabic has been receiving widespread attention on account of its dialectal variance. While prior research studies have tried to adapt these multilingual models for dialectal variants of Arabic, it still remains a challenging problem owing to the lack of sufficient monolingual dialectal data and parallel translation data for such dialectal variants. It remains an open problem whether this limited dialectal data can be used to improve models trained in Arabic on its dialectal variants. First, we show that multilingual-BERT (mBERT) incrementally pretrained on Arabic monolingual data takes less training time and yields comparable accuracy when compared to our custom monolingual Arabic model, and beats existing models (by an avg. metric of +$6.41$). We then explore two continual pre-training methods -- (1) using small amounts of dialectal data for continual fine-tuning and (2) using parallel Arabic-to-English data with a Translation Language Modeling (TLM) loss function. We show that both approaches help improve performance on dialectal classification tasks ($+4.64$ avg. gain) when used on monolingual models.
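
Below is a minimal sketch of how the two continual pre-training recipes summarized in the abstract could be set up with the Hugging Face `transformers` and `datasets` libraries. This is not the authors' released code: the corpus file names (`dialectal_arabic.txt`, `ar_en_parallel.csv`), the hyperparameters, and the TLM approximation (standard MLM masking applied over a concatenated Arabic-English sentence pair, without resetting position ids per sentence) are illustrative assumptions.

```python
# Hedged sketch of the two continual pre-training ideas from the abstract:
#   (1) continue masked-language-model (MLM) training of mBERT on a small
#       dialectal Arabic corpus;
#   (2) a TLM-style variant that masks tokens over concatenated
#       Arabic-English parallel sentences.
# File names and hyperparameters below are illustrative assumptions.

from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "bert-base-multilingual-cased"  # mBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

# (1) Hypothetical monolingual dialectal corpus, one sentence per line.
dialect = load_dataset("text", data_files={"train": "dialectal_arabic.txt"})

def tokenize_mono(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dialect_tok = dialect.map(tokenize_mono, batched=True, remove_columns=["text"])

# (2) Hypothetical parallel corpus stored as a CSV with "ar" and "en" columns.
# Encoding each pair as a single sequence and masking it with the standard MLM
# collator approximates a TLM objective, since masked tokens can attend to
# context in both languages.
parallel = load_dataset("csv", data_files={"train": "ar_en_parallel.csv"})

def tokenize_pair(batch):
    return tokenizer(batch["ar"], batch["en"], truncation=True, max_length=256)

parallel_tok = parallel.map(
    tokenize_pair, batched=True, remove_columns=["ar", "en"]
)

# Standard MLM masking (15% of tokens) is reused for both objectives.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm_probability=0.15
)

args = TrainingArguments(
    output_dir="mbert-arabic-continual",
    per_device_train_batch_size=32,
    num_train_epochs=1,
    learning_rate=5e-5,
)

# Train on one objective at a time; swapping the dataset switches between
# the dialectal MLM recipe (1) and the TLM-style parallel recipe (2).
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dialect_tok["train"],  # or parallel_tok["train"] for (2)
    data_collator=collator,
)
trainer.train()
```

In practice one would run such a loop once per recipe (or interleave batches) and then fine-tune the resulting checkpoint on the downstream dialect classification tasks to compare against the numbers reported in the paper.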

Authors (6)
  1. Soumajyoti Sarkar (21 papers)
  2. Kaixiang Lin (22 papers)
  3. Sailik Sengupta (24 papers)
  4. Leonard Lausen (12 papers)
  5. Sheng Zha (25 papers)
  6. Saab Mansour (32 papers)
Citations (3)