JABER and SABER: Junior and Senior Arabic BERt (2112.04329v3)
Published 8 Dec 2021 in cs.CL
Abstract: Language-specific pre-trained models have proven to be more accurate than multilingual ones in monolingual evaluation settings, and Arabic is no exception. However, we found that previously released Arabic BERT models were significantly under-trained. In this technical report, we present JABER and SABER, Junior and Senior Arabic BERt respectively, our pre-trained language model prototypes dedicated to Arabic. We conduct an empirical study to systematically evaluate the performance of models across a diverse set of existing Arabic NLU tasks. Experimental results show that JABER and SABER achieve state-of-the-art performance on ALUE, a new benchmark for Arabic Language Understanding Evaluation, as well as on a well-established NER benchmark.
- Abbas Ghaddar
- Yimeng Wu
- Ahmad Rashid
- Khalil Bibi
- Mehdi Rezagholizadeh
- Chao Xing
- Yasheng Wang
- Duan Xinyu
- Zhefeng Wang
- Baoxing Huai
- Xin Jiang
- Qun Liu
- Philippe Langlais