Can a Multichoice Dataset be Repurposed for Extractive Question Answering? (2404.17342v1)
Abstract: The rapid evolution of NLP has favored major languages such as English, leaving a significant gap for many others due to limited resources. This is especially evident in the context of data annotation, a task whose importance cannot be overstated, yet which is time-consuming and costly. Thus, any dataset for resource-poor languages is precious, in particular when it is task-specific. Here, we explore the feasibility of repurposing existing datasets for a new NLP task: we repurposed the Belebele dataset (Bandarkar et al., 2023), which was designed for multiple-choice question answering (MCQA), to enable extractive QA (EQA) in the style of machine reading comprehension. We present annotation guidelines and a parallel EQA dataset for English and Modern Standard Arabic (MSA). We also present QA evaluation results for several monolingual and cross-lingual QA pairs, including English, MSA, and five Arabic dialects. Our aim is to enable others to adapt our approach for the 120+ other language variants in Belebele, many of which are deemed under-resourced. We also conduct a thorough analysis and share our insights from the process, which we hope will contribute to a deeper understanding of the challenges and opportunities associated with task reformulation in NLP research.
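At its core, the repurposing step maps each MCQA item (passage, question, answer options, correct-option index) to a SQuAD-style EQA item (passage, question, answer span). The sketch below illustrates one minimal way to attempt that mapping; the field names (`flores_passage`, `question`, `mc_answer1`..`mc_answer4`, `correct_answer_num`) follow the public Belebele release but should be treated as assumptions, and the naive substring match shows why human span annotation is needed: the correct option is often a paraphrase rather than a verbatim span of the passage.

```python
# Sketch: converting a Belebele-style MCQA record to SQuAD-style EQA.
# Field names are assumptions based on the public Belebele release;
# this is an illustration, not the paper's annotation pipeline.

def mcqa_to_eqa(record: dict) -> dict | None:
    """Attempt to convert one MCQA record into an extractive QA record.

    Returns a SQuAD-style dict if the correct option occurs verbatim
    in the passage; returns None otherwise, flagging the item for the
    kind of manual span annotation the paper's guidelines describe.
    """
    passage = record["flores_passage"]
    correct = record[f"mc_answer{record['correct_answer_num']}"]
    start = passage.find(correct)
    if start == -1:
        # Option is a paraphrase, not a passage span: annotate by hand.
        return None
    return {
        "context": passage,
        "question": record["question"],
        "answers": {"text": [correct], "answer_start": [start]},
    }
```

Items for which such a heuristic fails are precisely the cases that motivate the paper's annotation guidelines, since an extractive answer span must be grounded in the passage itself.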
- Teresa Lynn
- Malik H. Altakrori
- Samar Mohamed Magdy
- Rocktim Jyoti Das
- Chenyang Lyu
- Mohamed Nasr
- Younes Samih
- Alham Fikri Aji
- Preslav Nakov
- Shantanu Godbole
- Salim Roukos
- Radu Florian
- Nizar Habash