Text Normalization for Low-Resource Languages of Africa (2103.15845v1)

Published 29 Mar 2021 in cs.CL

Abstract: Training data for machine learning models can come from many different sources, which can be of dubious quality. For resource-rich languages like English, there is a lot of data available, so we can afford to throw out the dubious data. For low-resource languages where there is much less data available, we can't necessarily afford to throw out the dubious data, in case we end up with a training set which is too small to train a model. In this study, we examine the effects of text normalization and data set quality for a set of low-resource languages of Africa -- Afrikaans, Amharic, Hausa, Igbo, Malagasy, Somali, Swahili, and Zulu. We describe our text normalizer, which we built in the Pynini framework, a Python library for finite state transducers, and our experiments in training language models for African languages using the Natural Language Toolkit (NLTK), an open-source Python library for NLP.
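The abstract names two concrete toolchains, so brief sketches may help orient readers. First, Pynini normalizers are typically built from context-dependent rewrite rules compiled into finite state transducers. The rule below (folding the curly apostrophe U+2019, which often appears in typeset Swahili words such as ng'ombe, to its ASCII form) is an illustrative assumption, not a rule taken from the paper:

```python
import pynini
from pynini.lib import byte

# Any sequence of bytes: the closure the rewrite rule applies over.
SIGMA_STAR = byte.BYTE.star

# Context-dependent rewrite: map a curly apostrophe (U+2019) to an
# ASCII apostrophe anywhere in the string. Empty left/right contexts
# mean the rule is unconditioned. (Illustrative rule, not the paper's.)
APOSTROPHE_RULE = pynini.cdrewrite(
    pynini.cross("\u2019", "'"),  # input-to-output transducer
    "",                           # left context
    "",                           # right context
    SIGMA_STAR,
)

def normalize(text: str) -> str:
    """Compose the input with the rule and take the best output path."""
    lattice = pynini.compose(pynini.accep(text), APOSTROPHE_RULE)
    return pynini.shortestpath(lattice).string()

print(normalize("ng\u2019ombe"))  # -> ng'ombe
```

A full normalizer would combine many such rules (punctuation folding, Unicode normalization, case handling) into one transducer; this shows only the core Pynini pattern.

Second, a minimal sketch of n-gram language model training with NLTK's `nltk.lm` package. The toy corpus, the trigram order, and the choice of Laplace smoothing are assumptions for illustration; the abstract does not specify the paper's configuration:

```python
from nltk.lm import Laplace
from nltk.lm.preprocessing import padded_everygram_pipeline

# Tiny stand-in corpus of tokenized Swahili sentences (placeholders).
sentences = [
    ["habari", "za", "asubuhi"],
    ["habari", "za", "jioni"],
    ["asante", "sana"],
]

ORDER = 3  # trigram model (assumed order)

# Pad each sentence with <s>/</s> markers and yield all 1- to 3-grams,
# plus a flat stream of padded tokens for building the vocabulary.
train_ngrams, vocab = padded_everygram_pipeline(ORDER, sentences)

# Laplace (add-one) smoothing keeps unseen n-grams from getting zero
# probability, which matters most when the training set is small.
lm = Laplace(ORDER)
lm.fit(train_ngrams, vocab)

print(lm.score("za", ["habari"]))  # P(za | habari)
```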

Authors (3)
  1. Andrew Zupon (2 papers)
  2. Evan Crew (2 papers)
  3. Sandy Ritchie (4 papers)
Citations (7)
