Fine-Tashkeel: Finetuning Byte-Level Models for Accurate Arabic Text Diacritization (2303.14588v1)
Published 25 Mar 2023 in cs.CL
Abstract: Most previous work on learning diacritization of the Arabic language relied on training models from scratch. In this paper, we investigate how to leverage pre-trained LLMs to learn diacritization. We finetune token-free pre-trained multilingual models (ByT5) to learn to predict and insert missing diacritics in Arabic text, a complex task that requires understanding the sentence semantics and the morphological structure of the tokens. We show that we can achieve state-of-the-art results on the diacritization task with a minimal amount of training and no feature engineering, reducing WER by 40%. We release our finetuned models for the benefit of the research community.
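The abstract describes a standard sequence-to-sequence finetuning setup: undiacritized Arabic text in, fully diacritized text out, on top of a byte-level ByT5 checkpoint. Below is a minimal sketch of what that looks like with the Hugging Face transformers API, assuming the public "google/byt5-small" checkpoint; the paper's actual checkpoint size, hyperparameters, data pipeline, and released model names are not specified here, and the training pair is purely illustrative.

```python
# Minimal sketch, not the authors' released code: finetune a byte-level ByT5
# model to restore Arabic diacritics, using the public "google/byt5-small"
# checkpoint as a stand-in (assumption, not the paper's exact setup).
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

# One hypothetical training pair: undiacritized input -> diacritized target.
src = "ذهب الولد إلى المدرسة"              # input without diacritics
tgt = "ذَهَبَ الوَلَدُ إِلَى المَدْرَسَةِ"   # target with diacritics restored

# ByT5 is token-free: the tokenizer maps raw UTF-8 bytes to ids, so no
# Arabic-specific vocabulary or feature engineering is needed.
inputs = tokenizer(src, return_tensors="pt")
labels = tokenizer(tgt, return_tensors="pt").input_ids

# Standard seq2seq cross-entropy loss; in practice this step sits inside a
# full training loop or a Trainer over a diacritized corpus.
loss = model(**inputs, labels=labels).loss
loss.backward()

# After finetuning, diacritization is plain generation.
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```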