ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing (2310.11166v2)

Published 17 Oct 2023 in cs.CL

Abstract: English and Chinese, known as resource-rich languages, have witnessed the strong development of transformer-based language models for natural language processing tasks. Vietnamese is spoken by approximately 100M people, and several pre-trained models, e.g., PhoBERT, ViBERT, and vELECTRA, perform well on general Vietnamese NLP tasks, including POS tagging and named entity recognition; however, these pre-trained language models remain limited on Vietnamese social media tasks. In this paper, we present the first monolingual pre-trained language model for Vietnamese social media texts, ViSoBERT, which is pre-trained on a large-scale corpus of high-quality and diverse Vietnamese social media texts using the XLM-R architecture. Moreover, we evaluate our pre-trained model on five important downstream tasks for Vietnamese social media texts: emotion recognition, hate speech detection, sentiment analysis, spam review detection, and hate speech span detection. Our experiments demonstrate that ViSoBERT, with far fewer parameters, surpasses previous state-of-the-art models on multiple Vietnamese social media tasks. Our ViSoBERT model is available only for research purposes.

Authors (4)
  1. Quoc-Nam Nguyen (7 papers)
  2. Thang Chau Phan (2 papers)
  3. Duc-Vu Nguyen (18 papers)
  4. Kiet Van Nguyen (74 papers)
Citations (7)