Give your Text Representation Models some Love: the Case for Basque (2004.00033v2)

Published 31 Mar 2020 in cs.CL

Abstract: Word embeddings and pre-trained language models make it possible to build rich representations of text and have enabled improvements across most NLP tasks. Unfortunately, they are very expensive to train, and many small companies and research groups tend to use models that have been pre-trained and made available by third parties, rather than building their own. This is suboptimal because, for many languages, the models have been trained on smaller (or lower quality) corpora. In addition, monolingual pre-trained models for non-English languages are not always available. At best, models for those languages are included in multilingual versions, where each language shares the quota of substrings and parameters with the rest of the languages. This is particularly true for smaller languages such as Basque. In this paper we show that a number of monolingual models (FastText word embeddings, FLAIR and BERT language models) trained with larger Basque corpora produce much better results than publicly available versions in downstream NLP tasks, including topic classification, sentiment classification, PoS tagging and NER. This work sets a new state-of-the-art in those tasks for Basque. All benchmarks and models used in this work are publicly available.
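The comparison described in the abstract amounts to fine-tuning a monolingual Basque model and a multilingual baseline on the same downstream task and comparing held-out scores. Below is a minimal sketch of that setup using the Hugging Face `transformers` API; the monolingual model identifier is an assumption (the paper's released BERT model is commonly distributed as `ixa-ehu/berteus-base-cased`, but check the authors' release), and the fine-tuning loop itself is omitted.

```python
# Hedged sketch: load a (assumed) monolingual Basque BERT and the public
# multilingual BERT baseline for a downstream classification task.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MONOLINGUAL = "ixa-ehu/berteus-base-cased"     # assumed Hub identifier for the Basque BERT
MULTILINGUAL = "bert-base-multilingual-cased"  # public multilingual baseline

for name in (MONOLINGUAL, MULTILINGUAL):
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

    # Fine-tune `model` on a Basque task (e.g. sentiment classification) and
    # evaluate on a held-out set; the paper reports the monolingual models
    # outperforming the multilingual ones on such tasks.
    enc = tokenizer("Kaixo mundua!", return_tensors="pt")
    out = model(**enc)
    print(name, out.logits.shape)
```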

Authors (7)
  1. Rodrigo Agerri (41 papers)
  2. IƱaki San Vicente (4 papers)
  3. Jon Ander Campos (20 papers)
  4. Ander Barrena (7 papers)
  5. Xabier Saralegi (5 papers)
  6. Aitor Soroa (29 papers)
  7. Eneko Agirre (53 papers)
Citations (59)
