
RoBERTuito: a pre-trained language model for social media text in Spanish (2111.09453v3)

Published 18 Nov 2021 in cs.CL and cs.AI

Abstract: Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for Natural Language Understanding tasks. Recently, several works have pre-trained specially-crafted models for particular domains, such as scientific papers, medical documents, and user-generated text, among others. These domain-specific models have been shown to improve performance significantly in most tasks; however, such models are not widely available for languages other than English. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. In addition, our model achieves top results on some English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and is also competitive with monolingual models on English tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub, together with the dataset used to pre-train it.
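The abstract notes that RoBERTuito is published on the HuggingFace model hub, so it can be loaded with the standard `transformers` API. Below is a minimal sketch; the hub identifier `pysentimiento/robertuito-base-uncased` is an assumption (check the hub for the exact name and for cased or de-accented variants):

```python
# Minimal sketch: load RoBERTuito from the HuggingFace hub and run a forward pass.
# The model identifier below is assumed; verify it on the hub before use.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "pysentimiento/robertuito-base-uncased"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Tokenize a short Spanish, tweet-like input and get masked-LM logits.
text = "esto es un ejemplo de tweet en español"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```

Since the model was pre-trained on tweets, inputs are typically preprocessed first (e.g., replacing user mentions and URLs with special tokens); the authors' `pysentimiento` package reportedly provides a `preprocess_tweet` helper for this, though the exact preprocessing expected by each variant should be confirmed against the model card.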

Authors (4)
  1. Juan Manuel Pérez (10 papers)
  2. Laura Alonso Alemany (6 papers)
  3. Franco Luque (4 papers)
  4. Damián A. Furman (2 papers)
Citations (93)
