TunBERT: Pretrained Contextualized Text Representation for Tunisian Dialect (2111.13138v1)
Abstract: Pretrained contextualized text representation models learn an effective representation of a natural language to make it machine understandable. Following the breakthrough of the attention mechanism and the introduction of the Transformer, a new generation of pretrained models has been proposed, achieving strong performance. Bidirectional Encoder Representations from Transformers (BERT) has become the state-of-the-art model for language understanding. Despite this success, most of the available models have been trained on Indo-European languages; similar research for under-represented languages and dialects remains sparse. In this paper, we investigate the feasibility of training monolingual Transformer-based language models for under-represented languages, with a specific focus on the Tunisian dialect. We evaluate our language model on a sentiment analysis task, a dialect identification task, and a reading comprehension question-answering task. We show that using noisy web-crawled data instead of structured data (Wikipedia, articles, etc.) is more suitable for such a non-standardized language. Moreover, results indicate that a relatively small web-crawled dataset yields performance as good as that obtained with larger datasets. Finally, our best-performing TunBERT model matches or improves on the state of the art in all three downstream tasks. We release the TunBERT pretrained model and the datasets used for fine-tuning.
- Abir Messaoudi (7 papers)
- Ahmed Cheikhrouhou (1 paper)
- Hatem Haddad (8 papers)
- Nourchene Ferchichi (1 paper)
- Moez BenHajhmida (1 paper)
- Abir Korched (1 paper)
- Malek Naski (2 papers)
- Faten Ghriss (1 paper)
- Amine Kerkeni (4 papers)
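
The abstract mentions fine-tuning the pretrained model on downstream tasks such as sentiment analysis. Below is a minimal sketch of how such fine-tuning could look with the Hugging Face Transformers `Trainer` API; the model identifier, the toy labeled examples, and the hyperparameters are illustrative assumptions, not the official TunBERT release artifacts or the paper's training setup.

```python
# Sketch: fine-tuning a pretrained BERT-style encoder on a Tunisian-dialect
# sentiment analysis task. Model path and data below are placeholders.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)
from datasets import Dataset

MODEL_ID = "path/to/pretrained-tunisian-bert"  # placeholder checkpoint

# Toy labeled examples (0 = negative, 1 = positive); the paper instead
# fine-tunes on the released Tunisian sentiment analysis datasets.
train_data = Dataset.from_dict({
    "text": ["behi barcha", "mouch behi"],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

def tokenize(batch):
    # Tokenize raw text into fixed-length input IDs and attention masks.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_data = train_data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="tunbert-sentiment",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=train_data)
trainer.train()
```

The same pattern applies to the dialect identification task (another sequence classification head) and, with a question-answering head, to the reading comprehension task.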