Adapting to the Low-Resource Double-Bind: Investigating Low-Compute Methods on Low-Resource African Languages (2303.16985v1)
Abstract: Many NLP tasks make use of massively pre-trained language models, which are computationally expensive. The need for high computational resources, compounded by the data scarcity of African languages, constitutes a real barrier to research experiments on these languages. In this work, we explore the applicability of low-compute approaches such as language adapters in the context of this low-resource double-bind. We aim to answer the following question: do language adapters allow those who are doubly bound by data and compute to practically build useful models? Through fine-tuning experiments on African languages, we evaluate their effectiveness as cost-effective approaches to low-resource African NLP. Using solely free compute resources, our results show that language adapters achieve performance comparable to that of massive pre-trained language models that are heavy on computational resources. This opens the door to further experimentation and exploration of the full extent of language adapters' capabilities.
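The abstract does not name specific tooling, but a common way to realize this setup is the MAD-X-style recipe: freeze a multilingual backbone and train a small bottleneck language adapter on unlabeled target-language text via masked language modeling, so only a small fraction of parameters receives gradients. The sketch below uses the Hugging Face `adapters` library; the base model, adapter name (`swa`), corpus file, and hyperparameters are illustrative assumptions rather than the paper's actual configuration.

```python
# Minimal sketch: training a language adapter with masked language modeling.
# All concrete choices (backbone, adapter config, dataset path, hyperparameters)
# are assumptions for illustration, not details taken from the paper.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    TrainingArguments,
)
import adapters
from adapters import AdapterTrainer

model_name = "xlm-roberta-base"  # assumed multilingual backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

adapters.init(model)                       # enable adapter support on a vanilla HF model
model.add_adapter("swa", config="seq_bn")  # Pfeiffer-style bottleneck adapter (name is hypothetical)
model.train_adapter("swa")                 # freeze the backbone; train only adapter weights

# Unlabeled monolingual text; the corpus file here is a placeholder.
raw = load_dataset("text", data_files={"train": "swahili_corpus.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
    remove_columns=["text"],
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = AdapterTrainer(
    model=model,
    args=TrainingArguments(
        output_dir="swa-adapter",
        per_device_train_batch_size=8,
        num_train_epochs=1,
    ),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()  # roughly 1-3% of the model's parameters are updated
```

Because gradients flow only through the adapter weights, training of this kind fits on free-tier GPUs, which is the practical point the abstract makes about building useful models under both data and compute constraints.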