LaoPLM: Pre-trained Language Models for Lao (2110.05896v3)
Abstract: Trained on large corpora, pre-trained language models (PLMs) can capture different levels of concepts in context and hence generate universal language representations, benefiting a wide range of downstream NLP tasks. Although PLMs have been widely adopted across NLP applications, especially for high-resource languages such as English, they remain under-represented in Lao NLP research. Previous work on Lao has been hampered by the lack of annotated datasets and the sparsity of language resources. In this work, we construct a text classification dataset to alleviate the resource-scarce situation of the Lao language. We additionally present the first transformer-based PLMs for Lao in four versions: BERT-small, BERT-base, ELECTRA-small, and ELECTRA-base, and evaluate them on two downstream tasks: part-of-speech tagging and text classification. Experiments demonstrate the effectiveness of our Lao models. We will release our models and datasets to the community, hoping to facilitate the future development of Lao NLP applications.
- Nankai Lin (21 papers)
- Yingwen Fu (8 papers)
- Chuwei Chen (1 paper)
- Ziyu Yang (6 papers)
- Shengyi Jiang (24 papers)
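Below is a minimal sketch of how a released Lao PLM of this kind could be fine-tuned for the paper's text classification task using the Hugging Face `transformers` library. The checkpoint name, label count, and input strings are hypothetical placeholders assumed for illustration, not identifiers published by the authors.

```python
# Hypothetical fine-tuning sketch for Lao text classification with a BERT-style PLM.
# NOTE: "lao-plm/bert-base-lao", num_labels, and the example texts are placeholders;
# substitute the actual checkpoint and dataset once the authors release them.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "lao-plm/bert-base-lao"  # hypothetical checkpoint name
num_labels = 5                        # hypothetical number of text categories

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)

# Toy batch: replace the placeholders with real Lao sentences and their class labels.
texts = ["<Lao sentence 1>", "<Lao sentence 2>"]
labels = torch.tensor([0, 1])

inputs = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**inputs, labels=labels)  # cross-entropy loss is computed internally
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

The same encoder could be paired with a token-level head (e.g. `AutoModelForTokenClassification`) for the part-of-speech tagging task described in the abstract.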