Evaluation of contextual embeddings on less-resourced languages (2107.10614v1)

Published 22 Jul 2021 in cs.CL

Abstract: The current dominance of deep neural networks in natural language processing is based on contextual embeddings such as ELMo, BERT, and BERT derivatives. Most existing work focuses on English; in contrast, we present here the first multilingual empirical comparison of two ELMo and several monolingual and multilingual BERT models using 14 tasks in nine languages. In monolingual settings, our analysis shows that monolingual BERT models generally dominate, with a few exceptions such as the dependency parsing task, where they are not competitive with ELMo models trained on large corpora. In cross-lingual settings, BERT models trained on only a few languages mostly do best, closely followed by massively multilingual BERT models.
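
As a rough illustration of the kind of pipeline such a comparison builds on, the sketch below extracts contextual token embeddings from massively multilingual BERT (mBERT) using the Hugging Face transformers library. This is an assumed tooling choice for illustration only, not the paper's own evaluation code; the checkpoint name and sentences are examples.

    # Minimal sketch (assumption: Hugging Face transformers + PyTorch),
    # not the authors' evaluation pipeline.
    import torch
    from transformers import AutoModel, AutoTokenizer

    MODEL_NAME = "bert-base-multilingual-cased"  # massively multilingual BERT

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModel.from_pretrained(MODEL_NAME)
    model.eval()

    # One Slovene and one English sentence; mBERT shares a single
    # vocabulary and parameter set across its ~100 training languages.
    sentences = [
        "Globoke nevronske mreže prevladujejo.",
        "Deep neural networks dominate.",
    ]

    with torch.no_grad():
        inputs = tokenizer(sentences, padding=True, return_tensors="pt")
        outputs = model(**inputs)

    # Shape (batch, sequence_length, hidden_size): one contextual vector per
    # token. In an evaluation like this paper's, these vectors would feed a
    # task-specific head, e.g. a dependency parser or classifier.
    token_embeddings = outputs.last_hidden_state
    print(token_embeddings.shape)  # e.g. torch.Size([2, 12, 768])

A monolingual BERT or an ELMo model would be swapped in at the checkpoint-loading step, keeping the downstream task head fixed, which is what makes a controlled per-task comparison across embedding models possible.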

Authors (7)
  1. Aleš Žagar (5 papers)
  2. Carlos S. Armendariz (2 papers)
  3. Andraž Repar (3 papers)
  4. Senja Pollak (37 papers)
  5. Matthew Purver (32 papers)
  6. Marko Robnik-Šikonja (39 papers)
  7. Matej Ulčar (8 papers)
Citations (9)