Transformers in the loop: Polarity in neural models of language (2109.03926v2)

Published 8 Sep 2021 in cs.CL

Abstract: Representation of linguistic phenomena in computational language models is typically assessed against the predictions of existing linguistic theories of these phenomena. Using the notion of polarity as a case study, we show that this is not always the most adequate set-up. We probe polarity via so-called 'negative polarity items' (in particular, English 'any') in two pre-trained Transformer-based models (BERT and GPT-2). We show that - at least for polarity - metrics derived from language models are more consistent with data from psycholinguistic experiments than linguistic theory predictions. Establishing this allows us to more adequately evaluate the performance of language models and also to use language models to discover new insights into natural language grammar beyond existing linguistic theories. This work contributes to establishing closer ties between psycholinguistic experiments and experiments with language models.
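
The probing setup described in the abstract can be illustrated with a small sentence-scoring comparison. The sketch below is only an illustration of the general idea, not the paper's exact metric or materials: it assumes the Hugging Face transformers library, scores sentences with GPT-2 log-probabilities, and uses invented example sentences contrasting a licensed and an unlicensed occurrence of the NPI 'any'.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative setup only: the paper probes both BERT and GPT-2 with
# experimentally controlled materials; here we just score two made-up sentences.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of GPT-2 log-probabilities for all tokens after the first."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # labels are shifted internally
    n_predicted = ids.size(1) - 1
    # out.loss is the mean negative log-likelihood per predicted token.
    return -out.loss.item() * n_predicted

# Hypothetical NPI contrast: 'any' licensed by negation vs. unlicensed.
licensed = "Mary didn't buy any books."
unlicensed = "Mary bought any books."

print("licensed:  ", sentence_logprob(licensed))
print("unlicensed:", sentence_logprob(unlicensed))
```

Under this kind of scoring, a model that tracks polarity should assign a higher score to the licensed sentence than to the minimally different unlicensed one; the paper's contribution is to compare such model-derived preferences against both linguistic theory and psycholinguistic acceptability data.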

Authors (2)
  1. Lisa Bylinina (7 papers)
  2. Alexey Tikhonov (35 papers)
