
Pre-training Polish Transformer-based Language Models at Scale

Published 7 Jun 2020 in cs.CL (arXiv:2006.04229v2)

Abstract: Transformer-based language models are now widely used in NLP. This is especially true for English, for which many pre-trained models based on the transformer architecture have been published in recent years. This has driven forward the state of the art for a variety of standard NLP tasks such as classification, regression, and sequence labeling, as well as text-to-text tasks such as machine translation, question answering, and summarization. The situation has been different for low-resource languages such as Polish, however. Although some transformer-based language models for Polish are available, none of them come close to the scale, in terms of corpus size and number of parameters, of the largest English-language models. In this study, we present two language models for Polish based on the popular BERT architecture. The larger model was trained on a dataset consisting of over 1 billion Polish sentences, or 135GB of raw text. We describe our methodology for collecting the data, preparing the corpus, and pre-training the model. We then evaluate our models on thirteen Polish linguistic tasks and demonstrate improvements over previous approaches on eleven of them.
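As an illustration of how a BERT-style Polish masked language model of the kind described here could be queried once released, the sketch below uses the Hugging Face transformers library. The model identifier is a placeholder, not the authors' published checkpoint name, and the example sentence is only for demonstration.

```python
# Minimal sketch: fill-in-the-blank prediction with a Polish BERT-style
# masked language model via Hugging Face transformers.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_ID = "path/to/polish-bert-large"  # hypothetical identifier, replace with a real checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForMaskedLM.from_pretrained(MODEL_ID)
model.eval()

# Mask one token in a Polish sentence ("Warszawa jest stolicą ___.").
text = f"Warszawa jest stolicą {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and print the model's top-5 candidate tokens.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print([tokenizer.decode(i).strip() for i in top_ids])
```

The same loading pattern would apply when fine-tuning such a checkpoint on the downstream classification, regression, and sequence-labeling tasks mentioned in the abstract, swapping the masked-LM head for a task-specific one.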

Citations (34)
