
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items (1808.10627v1)

Published 31 Aug 2018 in cs.CL

Abstract: In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.
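
As a rough illustration (not the paper's actual code, data, or model), the sketch below shows the kind of probe such an evaluation implies: comparing the probability an LSTM language model assigns to an NPI like "ever" after a licensing context ("nobody has ...") versus a non-licensing one ("somebody has ..."). The tiny vocabulary, example sentences, and untrained model here are hypothetical placeholders; a real experiment would score a language model trained on a large corpus.

```python
# Hedged sketch: scoring a negative polarity item ("ever") under a
# licensed vs. an unlicensed context with a toy LSTM language model.
# All names, sentences, and weights below are illustrative assumptions.
import torch
import torch.nn as nn

vocab = ["<bos>", "nobody", "somebody", "has", "ever", "left", "."]
stoi = {w: i for i, w in enumerate(vocab)}

class TinyLSTMLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        # hidden states at each position -> logits over the next token
        hidden, _ = self.lstm(self.embed(ids))
        return self.out(hidden)

def npi_prob(model, prefix, npi="ever"):
    """P(npi | prefix) under the LM -- the quantity compared across contexts."""
    ids = torch.tensor([[stoi[w] for w in prefix]])
    logits = model(ids)[0, -1]           # distribution after the last prefix token
    probs = torch.softmax(logits, dim=-1)
    return probs[stoi[npi]].item()

model = TinyLSTMLM(len(vocab))           # untrained here; load a trained LM in practice
licensed   = ["<bos>", "nobody",   "has"]   # "nobody" licenses "ever"
unlicensed = ["<bos>", "somebody", "has"]   # no licensor: "*somebody has ever ..."
# A model sensitive to licensing should satisfy:
#   npi_prob(model, licensed) > npi_prob(model, unlicensed)
print(npi_prob(model, licensed), npi_prob(model, unlicensed))
```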

Authors (2)
  1. Jaap Jumelet (25 papers)
  2. Dieuwke Hupkes (49 papers)
Citations (60)
