Beyond English: Evaluating LLMs for Arabic Grammatical Error Correction (2312.08400v1)

Published 13 Dec 2023 in cs.CL and cs.AI

Abstract: LLMs finetuned to follow human instructions have recently exhibited significant capabilities in various English NLP tasks. However, their performance in grammatical error correction (GEC), especially on languages other than English, remains largely unexplored. In this work, we evaluate the abilities of instruction-finetuned LLMs on Arabic GEC, a task made complex by Arabic's rich morphology. Our findings suggest that various prompting methods, coupled with (in-context) few-shot learning, demonstrate considerable effectiveness, with GPT-4 achieving up to $65.49$ F$_{1}$ score under expert prompting (approximately $5$ points higher than our established baseline). Despite these positive results, we find that instruction-finetuned models, regardless of their size, are still outperformed by fully finetuned models, even ones that are significantly smaller. This disparity highlights substantial room for improvement in LLMs. Inspired by methods used in low-resource machine translation, we also develop a method exploiting synthetic data that significantly outperforms previous models on two standard Arabic benchmarks. Our best model achieves a new SOTA on Arabic GEC, with $73.29$ and $73.26$ F$_{1}$ on the 2014 and 2015 QALB datasets, respectively, compared to peer-reviewed published baselines.
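The few-shot prompting setup described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's actual prompts: the instruction wording, the example pairs, and the `correct_arabic` helper are all hypothetical, and the GPT-4 call assumes the standard OpenAI chat-completions client.

```python
# Minimal sketch of (in-context) few-shot prompting for Arabic GEC.
# The system instruction and example pairs are illustrative placeholders,
# not the prompts or demonstrations used in the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical few-shot demonstrations: (erroneous sentence, corrected sentence).
FEW_SHOT_PAIRS = [
    ("ذهبت الى المدرسه", "ذهبت إلى المدرسة"),   # hamza on إلى, taa marbuta on المدرسة
    ("هاذا الكتاب مفيدا", "هذا الكتاب مفيد"),    # spelling of هذا, nominative predicate
]

def correct_arabic(sentence: str) -> str:
    """Correct one Arabic sentence using k in-context demonstrations."""
    messages = [{
        "role": "system",
        "content": ("You are an expert Arabic grammatical error corrector. "
                    "Rewrite each input sentence with all errors fixed, "
                    "changing nothing else."),
    }]
    # Each demonstration is presented as a prior user/assistant exchange.
    for src, tgt in FEW_SHOT_PAIRS:
        messages.append({"role": "user", "content": src})
        messages.append({"role": "assistant", "content": tgt})
    messages.append({"role": "user", "content": sentence})
    resp = client.chat.completions.create(
        model="gpt-4", messages=messages, temperature=0
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    print(correct_arabic("انا ذهبة الي السوق امس"))
```

In an evaluation harness, outputs like this would then be scored against the QALB references with the standard M2 scorer to produce the F$_{1}$ numbers reported above.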

Authors (4)
  1. Sang Yun Kwon (6 papers)
  2. Gagan Bhatia (12 papers)
  3. El Moatez Billah Nagoudi (31 papers)
  4. Muhammad Abdul-Mageed (102 papers)
Citations (15)