How Good are Commercial Large Language Models on African Languages? (2305.06530v1)
Abstract: Recent advancements in NLP have led to the proliferation of large pretrained language models (LLMs). These models have been shown to yield good performance, via in-context learning, even on unseen tasks and languages. They have also been exposed as commercial APIs as a form of language-model-as-a-service, with widespread adoption. However, their performance on African languages is largely unknown. We present a preliminary analysis of commercial LLMs on two tasks (machine translation and text classification) across eight African languages, spanning different language families and geographical areas. Our results suggest that commercial LLMs produce below-par performance on African languages. We also find that they perform better on text classification than machine translation. In general, our findings present a call to action to ensure African languages are well represented in commercial LLMs, given their growing popularity.
- Jessica Ojo (6 papers)
- Kelechi Ogueji (14 papers)