
Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding (2205.10687v1)

Published 21 May 2022 in cs.CL

Abstract: There is a growing body of work in recent years to develop pre-trained language models (PLMs) for the Arabic language. This work addresses two major problems in existing Arabic PLMs that constrain progress in Arabic NLU and NLG. First, existing Arabic PLMs are not well explored, and their pre-training can be improved significantly using a more methodical approach. Second, there is a lack of systematic and reproducible evaluation of these models in the literature. In this work, we revisit both the pre-training and evaluation of Arabic PLMs. In terms of pre-training, we explore improving Arabic LMs from three perspectives: the quality of the pre-training data, the size of the model, and incorporating character-level information. As a result, we release three new Arabic BERT-style models (JABER, Char-JABER, and SABER), and two T5-style models (AT5S and AT5B). In terms of evaluation, we conduct a comprehensive empirical study to systematically evaluate the performance of existing state-of-the-art models on ALUE, a leaderboard-powered benchmark for Arabic NLU tasks, and on a subset of the ARGEN benchmark for Arabic NLG tasks. We show that our models significantly outperform existing Arabic PLMs and achieve new state-of-the-art performance on discriminative and generative Arabic NLU and NLG tasks. Our models and source code to reproduce our results will be made available shortly.
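As a rough illustration of the evaluation setup described in the abstract (fine-tuning a BERT-style Arabic PLM on an ALUE-style classification task), the sketch below uses the Hugging Face transformers API. The checkpoint name, data files, and column names are placeholders, not the paper's actual released artifacts or the official ALUE loaders.

```python
# Minimal sketch: fine-tuning a BERT-style Arabic PLM on a sentence-classification
# task, in the spirit of the ALUE evaluation described in the abstract.
# "your-org/arabic-bert-checkpoint" and the CSV files are placeholders.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)
from datasets import load_dataset

checkpoint = "your-org/arabic-bert-checkpoint"  # placeholder model name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Placeholder dataset with "text" and "label" columns standing in for an ALUE task.
raw = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

encoded = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="alue-finetune",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```

Any of the paper's released BERT-style checkpoints could in principle be dropped into the `checkpoint` slot once published; the hyperparameters above are generic fine-tuning defaults, not values reported by the authors.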

Authors (14)
  1. Abbas Ghaddar (18 papers)
  2. Yimeng Wu (8 papers)
  3. Sunyam Bagga (3 papers)
  4. Ahmad Rashid (24 papers)
  5. Khalil Bibi (6 papers)
  6. Mehdi Rezagholizadeh (78 papers)
  7. Chao Xing (11 papers)
  8. Yasheng Wang (91 papers)
  9. Duan Xinyu (2 papers)
  10. Zhefeng Wang (39 papers)
  11. Baoxing Huai (28 papers)
  12. Xin Jiang (242 papers)
  13. Qun Liu (230 papers)
  14. Philippe Langlais (23 papers)
Citations (6)