Czert -- Czech BERT-like Model for Language Representation (2103.13031v3)
Published 24 Mar 2021 in cs.CL
Abstract: This paper describes the training process of the first Czech monolingual language representation models based on the BERT and ALBERT architectures. We pre-train our models on more than 340K sentences, which is 50 times more than the multilingual models that include Czech data. We outperform the multilingual models on 9 out of 11 datasets. In addition, we establish new state-of-the-art results on nine datasets. Finally, we discuss the properties of monolingual and multilingual models based upon our results. We publish all the pre-trained and fine-tuned models freely for the research community.
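Since the authors release their pre-trained and fine-tuned models publicly, a natural way to try them is through the Hugging Face transformers library. The minimal sketch below assumes the BERT-variant checkpoint is hosted under the identifier UWB-AIR/Czert-B-base-cased (an assumption, not confirmed by the abstract) and shows masked-token prediction on a Czech sentence.

```python
# Minimal sketch: loading a published Czert checkpoint with Hugging Face
# transformers. The model identifier below is an assumption about where the
# authors host the BERT-variant checkpoint; adjust it to the actual release.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_ID = "UWB-AIR/Czert-B-base-cased"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForMaskedLM.from_pretrained(MODEL_ID)

# Predict a masked token in a Czech sentence.
text = f"Praha je hlavní město {tokenizer.mask_token} republiky."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and print the highest-scoring candidate token.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```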
- Jakub Sido
- Ondřej Pražák
- Pavel Přibáň
- Jan Pašek
- Michal Seják
- Miloslav Konopík