LaoPLM: Pre-trained Language Models for Lao (2110.05896v3)

Published 12 Oct 2021 in cs.CL

Abstract: Trained on large corpora, pre-trained language models (PLMs) can capture different levels of concepts in context and hence produce universal language representations that benefit multiple downstream NLP tasks. Although PLMs have been widely used in most NLP applications, especially for high-resource languages such as English, they are under-represented in Lao NLP research. Previous work on Lao has been hampered by the lack of annotated datasets and the sparsity of language resources. In this work, we construct a text classification dataset to alleviate the resource-scarce situation of the Lao language. We additionally present the first transformer-based PLMs for Lao in four versions: BERT-small, BERT-base, ELECTRA-small, and ELECTRA-base, and evaluate them on two downstream tasks: part-of-speech tagging and text classification. Experiments demonstrate the effectiveness of our Lao models. We will release our models and datasets to the community, hoping to facilitate the future development of Lao NLP applications.
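
The abstract describes fine-tuning the released Lao PLMs on downstream tasks such as text classification. A minimal sketch of how such a model could be used is shown below, assuming the checkpoints are loadable through the Hugging Face Transformers library; the model identifier "lao-bert-base", the number of labels, and the example sentence are placeholders, not official names from the paper.

```python
# Hypothetical sketch: running a Lao BERT checkpoint for text classification
# with Hugging Face Transformers. "lao-bert-base" is a placeholder identifier;
# the paper's released models may be published under different names.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "lao-bert-base"  # placeholder, not an official release name
NUM_LABELS = 5                # assumed number of text-classification classes

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS
)

# Encode a Lao sentence ("hello") and take the argmax over class logits.
inputs = tokenizer("ສະບາຍດີ", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```

In practice the classification head would first be fine-tuned on the paper's Lao text classification dataset before inference; the snippet only illustrates the loading and prediction flow.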

Authors (5)
  1. Nankai Lin (21 papers)
  2. Yingwen Fu (8 papers)
  3. Chuwei Chen (1 paper)
  4. Ziyu Yang (6 papers)
  5. Shengyi Jiang (24 papers)
Citations (3)