
Evaluating German Transformer Language Models with Syntactic Agreement Tests (2007.03765v1)

Published 7 Jul 2020 in cs.CL

Abstract: Pre-trained transformer language models (TLMs) have recently refashioned NLP: most state-of-the-art NLP models now operate on top of TLMs to benefit from contextualization and knowledge induction. To explain their success, the scientific community has conducted numerous analyses. Among other methods, syntactic agreement tests have been used to analyse TLMs. However, most of these studies were conducted for English. In this work, we analyse German TLMs. To this end, we design numerous agreement tasks, some of which consider peculiarities of the German language. Our experimental results show that state-of-the-art German TLMs generally perform well on agreement tasks, but we also identify and discuss syntactic structures that push them to their limits.
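To illustrate the kind of evaluation the abstract describes, below is a minimal sketch of a subject-verb agreement test for a German masked language model. It is not the paper's actual experimental code: the checkpoint (`bert-base-german-cased`), the example sentence, and the scoring via masked-token log-probabilities are all assumptions chosen for illustration. The idea is to present a minimal pair and check whether the model assigns higher probability to the verb form that agrees with the subject.

```python
# Minimal sketch of a syntactic agreement test, assuming a Hugging Face
# masked-LM setup with the public "bert-base-german-cased" checkpoint.
# The paper's exact models, sentences, and task construction may differ.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-german-cased"  # assumed German TLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def mask_logprob(sentence: str, target: str) -> float:
    """Log-probability the model assigns to `target` at the [MASK] slot.

    Assumes `target` is a single token in the model's vocabulary,
    which holds for common German verb forms like "spielen"/"spielt".
    """
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    target_id = tokenizer.convert_tokens_to_ids(target)
    return torch.log_softmax(logits, dim=-1)[target_id].item()

# Agreement minimal pair: the plural subject "Die Kinder" requires the
# plural verb "spielen"; the singular "spielt" is an agreement violation.
template = "Die Kinder [MASK] im Garten."
good = mask_logprob(template, "spielen")  # grammatical
bad = mask_logprob(template, "spielt")    # ungrammatical
print(f"spielen: {good:.2f}  spielt: {bad:.2f}  agreement passed: {good > bad}")
```

Aggregating the pass/fail outcome of many such minimal pairs per syntactic construction yields an accuracy score per phenomenon, which is the usual way results of agreement test suites are reported.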

Authors (5)
  1. Karolina Zaczynska (3 papers)
  2. Nils Feldhus (18 papers)
  3. Robert Schwarzenberg (12 papers)
  4. Aleksandra Gabryszak (7 papers)
  5. Sebastian Möller (77 papers)
Citations (4)