bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark (2306.02349v2)
Abstract: We present bgGLUE (Bulgarian General Language Understanding Evaluation), a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian. Our benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, and question answering) and machine learning tasks (sequence labeling, document-level classification, and regression). We run the first systematic evaluation of pre-trained language models for Bulgarian, comparing and contrasting results across the nine tasks in the benchmark. The evaluation results show strong performance on sequence labeling tasks, but considerable room for improvement remains on tasks that require more complex reasoning. We make bgGLUE publicly available together with the fine-tuning and evaluation code, as well as a public leaderboard at https://bgglue.github.io/, and we hope that it will enable further advancements in developing NLU models for Bulgarian.
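A benchmark like this typically reports one score per task and aggregates them into a single leaderboard number. The sketch below is a minimal, hypothetical illustration of that pattern in pure Python: `macro_f1` stands in for a per-task classification metric, and `benchmark_average` takes an unweighted mean over tasks. The task names, scores, and the choice of a simple average are all illustrative assumptions, not bgGLUE's actual results or official aggregation.

```python
# Hypothetical sketch: aggregating per-task scores into one benchmark
# number, as GLUE-style leaderboards commonly do. Task names and scores
# are illustrative, not actual bgGLUE results.

def macro_f1(tp, fp, fn):
    """F1 from raw counts; a typical metric for classification tasks."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def benchmark_average(task_scores):
    """Unweighted mean over per-task scores (one common aggregation choice)."""
    return sum(task_scores.values()) / len(task_scores)

# Illustrative per-task scores for three of the task types the
# benchmark covers (values are made up).
scores = {
    "ner": 0.90,            # sequence labeling
    "sentiment": 0.72,      # document-level classification
    "fact-checking": 0.61,  # requires more complex reasoning
}
print(round(benchmark_average(scores), 4))
```

In practice each task would use its own metric (span-level F1 for NER, Pearson correlation for regression, etc.) before averaging; the unweighted mean is just the simplest aggregation convention.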
- Momchil Hardalov
- Pepa Atanasova
- Todor Mihaylov
- Galia Angelova
- Kiril Simov
- Petya Osenova
- Ves Stoyanov
- Ivan Koychev
- Preslav Nakov
- Dragomir Radev