
Czert -- Czech BERT-like Model for Language Representation (2103.13031v3)

Published 24 Mar 2021 in cs.CL

Abstract: This paper describes the training process of the first Czech monolingual language representation models, based on the BERT and ALBERT architectures. We pre-train our models on more than 340K sentences, 50 times more Czech data than is included in the multilingual models. We outperform the multilingual models on 9 out of 11 datasets and establish new state-of-the-art results on nine of them. Finally, we discuss the properties of monolingual and multilingual models in light of our results. We publish all the pre-trained and fine-tuned models freely for the research community.
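
Since the abstract states that the pre-trained models are released publicly, a minimal sketch of loading one with the Hugging Face transformers library is shown below. The hub identifier UWB-AIR/Czert-B-base-cased is an assumption about where the BERT-variant checkpoint is hosted, not something confirmed by this page.

```python
# Minimal sketch: loading a Czert checkpoint via Hugging Face transformers.
# The model ID "UWB-AIR/Czert-B-base-cased" is assumed and may differ from
# the authors' actual release.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("UWB-AIR/Czert-B-base-cased")
model = AutoModel.from_pretrained("UWB-AIR/Czert-B-base-cased")

# Encode a Czech sentence and obtain contextual token embeddings.
inputs = tokenizer("Tohle je český jazykový model.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```

The fine-tuned variants the authors mention would be loaded the same way, swapping in the task-specific checkpoint name.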

Authors (6)
  1. Jakub Sido (8 papers)
  2. Ondřej Pražák (11 papers)
  3. Pavel Přibáň (7 papers)
  4. Jan Pašek (2 papers)
  5. Michal Seják (3 papers)
  6. Miloslav Konopík (8 papers)
Citations (38)
