Low-Resource Cross-Lingual Adaptive Training for Nigerian Pidgin (2307.00382v1)
Abstract: Developing effective spoken language processing systems for low-resource languages poses several challenges due to the lack of parallel data and limited resources for fine-tuning models. In this work, we aim to improve both text classification and translation for Nigerian Pidgin (Naija) by collecting a large-scale parallel English-Pidgin corpus, and we further propose a cross-lingual adaptive training framework, comprising both continual and task-adaptive training, to adapt a pre-trained base model to low-resource languages. Our studies show that English pre-trained LLMs serve as a stronger prior than multilingual LLMs on English-Pidgin tasks, yielding up to 2.38 BLEU improvement, and demonstrate that augmenting orthographic data and using task-adaptive training with back-translation can have a significant impact on model performance.
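The back-translation step mentioned in the abstract can be illustrated with a minimal sketch: monolingual Pidgin sentences are translated into English by a reverse model, and the resulting synthetic (English, Pidgin) pairs are added to the training data. The `pidgin_to_english` function below is a hypothetical stub standing in for a trained reverse translation model, with a tiny word map used purely for illustration.

```python
def pidgin_to_english(sentence: str) -> str:
    """Hypothetical stand-in for a trained Pidgin->English reverse model.

    Stubbed with a tiny word map for illustration only; a real system
    would call a neural translation model here.
    """
    word_map = {"wetin": "what", "dey": "is", "happen": "happening"}
    return " ".join(word_map.get(word, word) for word in sentence.split())


def back_translate(monolingual_pidgin: list[str]) -> list[tuple[str, str]]:
    """Build synthetic (English, Pidgin) training pairs.

    The English side is synthetic model output; the Pidgin side is the
    original (gold) monolingual text, so the forward model learns to
    produce fluent Pidgin.
    """
    return [(pidgin_to_english(s), s) for s in monolingual_pidgin]


corpus = ["wetin dey happen"]
pairs = back_translate(corpus)
print(pairs)  # [('what is happening', 'wetin dey happen')]
```

In practice, these synthetic pairs would be mixed with the collected parallel corpus during the task-adaptive training stage.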
- Pin-Jie Lin (10 papers)
- Muhammed Saeed (5 papers)
- Ernie Chang (33 papers)
- Merel Scholman (4 papers)