A Comparative Study of Pretrained Language Models on Thai Social Text Categorization (1912.01580v2)
Abstract: The ever-growing volume of user-generated content on social media provides a nearly unlimited corpus of unlabeled data, even in languages where resources are scarce. In this paper, we demonstrate that state-of-the-art results on two Thai social text categorization tasks can be achieved by pretraining a language model on a large, noisy Thai social media corpus of over 1.26 billion tokens and then fine-tuning it on the downstream classification tasks. Because the content is linguistically noisy and domain-specific, we apply data preprocessing steps designed specifically for Thai social media to make the text easier for the models to learn. We compare four modern language models: ULMFiT, ELMo with biLSTM, OpenAI GPT, and BERT. We systematically compare the models across several dimensions, including pretraining and fine-tuning speed, perplexity, downstream classification benchmarks, and performance with limited pretraining data.
- Thanapapas Horsuwan
- Kasidis Kanwatchara
- Peerapon Vateekul
- Boonserm Kijsirikul
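
The abstract describes a pretrain-then-fine-tune pipeline: a language model is first pretrained on a large Thai social media corpus and then fine-tuned for downstream text classification, with perplexity used to compare the pretrained models. The snippet below is a minimal sketch of that fine-tuning stage, not the authors' code: the checkpoint name, label count, hyperparameters, and toy examples are placeholders (the paper pretrains its own Thai models rather than using a public multilingual checkpoint).

```python
# Minimal sketch of fine-tuning a pretrained BERT-style model for text classification.
# Assumptions: "bert-base-multilingual-cased" stands in for the paper's Thai-pretrained
# models; the two toy examples stand in for a Thai social text categorization dataset.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # placeholder checkpoint, not the paper's model
NUM_LABELS = 3                               # e.g. negative / neutral / positive

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_LABELS)

# Toy labeled examples; a real run would iterate over the full dataset with a DataLoader.
texts = ["ตัวอย่างข้อความ 1", "ตัวอย่างข้อความ 2"]
labels = torch.tensor([0, 2])
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a few epochs, as is typical for BERT-style fine-tuning
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    # The paper's perplexity comparison applies to the pretraining LM objective;
    # for a cross-entropy loss L, perplexity is simply exp(L).
    print(f"epoch {epoch}: loss={outputs.loss.item():.4f}")
```

The same skeleton applies to each of the four compared models in principle; what differs in the paper is the pretraining objective (ULMFiT's and GPT's causal LM, ELMo's bidirectional LM, BERT's masked LM) and the head attached for the downstream classification task.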