Correcting Arabic Soft Spelling Mistakes using BiLSTM-based Machine Learning (2108.01141v1)

Published 2 Aug 2021 in cs.CL and cs.LG

Abstract: Soft spelling errors are a class of spelling mistakes that is widespread among native Arabic speakers and foreign learners alike. Some of these errors are typographical in nature. They occur due to orthographic variations of some Arabic letters and the complex rules that dictate their correct usage. Many people forgo these rules, and given the identical phonetic sounds, they often confuse such letters. In this paper, we propose a bidirectional long short-term memory network that corrects this class of errors. We develop, train, evaluate, and compare a set of BiLSTM networks. We approach the spelling correction problem at the character level. We handle Arabic texts from both classical and modern standard Arabic. We treat the problem as a one-to-one sequence transcription problem. Since the class of soft Arabic errors encompasses omission and addition mistakes, we propose a simple, low-resource, yet effective technique that maintains the one-to-one sequencing and avoids using a costly encoder-decoder architecture. We train the BiLSTM models to correct the spelling mistakes using transformed input and stochastic error injection approaches. We recommend a configuration that has two BiLSTM layers, uses dropout regularization, and is trained using the latter approach with an error injection rate of 40%. The best model corrects 96.4% of the injected errors and achieves a low character error rate of 1.28% on a real test set of soft spelling mistakes.

Citations (9)