RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use (2211.08192v1)

Published 15 Nov 2022 in cs.CL and cs.LG

Abstract: Large transformer-based language models, e.g. BERT and GPT-3, outperform previous architectures on most natural language processing tasks. Such language models are first pre-trained on gigantic corpora of text and later used as base models for fine-tuning on a particular task. Since the pre-training step is usually not repeated, base models are not up to date with the latest information. In this paper, we update RobBERT, a RoBERTa-based state-of-the-art Dutch language model, which was trained in 2019. First, the tokenizer of RobBERT is updated to include new high-frequency tokens present in the latest Dutch OSCAR corpus, e.g. corona-related words. Then we further pre-train the RobBERT model using this dataset. To evaluate whether our new model is a plug-in replacement for RobBERT, we introduce two additional criteria based on concept drift of existing tokens and alignment for novel tokens. We found that for certain language tasks this update results in a significant performance increase. These results highlight the benefit of continually updating a language model to account for evolving language use.
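
The two update steps the abstract describes, extending the tokenizer with new high-frequency tokens and resizing the embedding matrix before continued pre-training, can be illustrated with a short Hugging Face `transformers` sketch. This is a minimal illustration, not the authors' exact pipeline: the model id `pdelobelle/robbert-v2-dutch-base` and the example tokens are assumptions, and the paper derives its new tokens from frequency statistics over the latest Dutch OSCAR corpus rather than a hand-picked list.

```python
# Minimal sketch of the two update steps described in the abstract:
# (1) extend the tokenizer with new high-frequency tokens, and
# (2) resize the embedding matrix so the model can be further pre-trained.
# Assumption: the model id and the example tokens below are illustrative,
# not taken from the paper.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
model = AutoModelForMaskedLM.from_pretrained("pdelobelle/robbert-v2-dutch-base")

# Tokens that became frequent after the original 2019 training run,
# e.g. corona-related vocabulary (hypothetical examples).
new_tokens = ["corona", "coronamaatregelen", "mondkapje", "lockdown"]

# Only add tokens the tokenizer does not already know.
num_added = tokenizer.add_tokens(
    [t for t in new_tokens if t not in tokenizer.get_vocab()]
)
print(f"Added {num_added} new tokens to the vocabulary.")

# Grow the input (and tied output) embeddings to match the new vocab size.
model.resize_token_embeddings(len(tokenizer))

# The model would then be further pre-trained with masked language modeling
# on the latest Dutch OSCAR corpus (training loop omitted here).
```

`add_tokens` followed by `resize_token_embeddings` is the standard `transformers` idiom for vocabulary extension; the newly added embedding rows start from a random initialization and only become useful after the further pre-training step the abstract describes.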

Authors (3)
  1. Pieter Delobelle (15 papers)
  2. Thomas Winters (10 papers)
  3. Bettina Berendt (20 papers)
Citations (5)