
What do RNN Language Models Learn about Filler-Gap Dependencies? (1809.00042v1)

Published 31 Aug 2018 in cs.CL

Abstract: RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn. Here we investigate whether state-of-the-art RNN language models represent long-distance filler-gap dependencies and constraints on them. Examining RNN behavior on experimentally controlled sentences designed to expose filler-gap dependencies, we show that RNNs can represent the relationship in multiple syntactic positions and over large spans of text. Furthermore, we show that RNNs learn a subset of the known restrictions on filler-gap dependencies, known as island constraints: RNNs show evidence for wh-islands, adjunct islands, and complex NP islands. These studies demonstrate that state-of-the-art RNN models are able to learn and generalize about empty syntactic positions.
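The paradigm described in the abstract probes a language model with minimally different sentence pairs and compares per-word surprisal, -log2 P(w_t | w_<t), at the hypothesized gap site: a drop in surprisal when a wh-filler licenses the gap is taken as evidence that the model represents the dependency. Below is a minimal sketch of that measurement, not the authors' code; the toy TinyRNNLM, the vocabulary, and the example sentence pair are illustrative assumptions (the paper's experiments use large pretrained LSTM language models).

```python
# Minimal sketch of surprisal-based probing for filler-gap dependencies.
# Assumption: a toy, untrained LSTM LM stands in for a pretrained RNN LM.
import math
import torch
import torch.nn as nn

class TinyRNNLM(nn.Module):
    """Toy LSTM language model; placeholder for a trained state-of-the-art RNN LM."""
    def __init__(self, vocab_size, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ids):
        h, _ = self.lstm(self.embed(ids))
        return self.out(h)  # logits predicting the next word at each position

def surprisal(model, vocab, sentence):
    """Per-word surprisal in bits: -log2 P(w_t | w_1..w_{t-1})."""
    ids = torch.tensor([[vocab[w] for w in sentence]])
    with torch.no_grad():
        logits = model(ids)
    log_probs = torch.log_softmax(logits, dim=-1)
    # The surprisal of word t comes from the prediction made after word t-1.
    surps = [-log_probs[0, t - 1, ids[0, t]].item() / math.log(2)
             for t in range(1, len(sentence))]
    return list(zip(sentence[1:], surps))

# Hypothetical minimal pair: with a trained LM, the filler-gap effect shows up
# as lower surprisal in the post-gap region when a wh-filler licenses the gap.
filler    = "i know what the lion devoured yesterday".split()
no_filler = "i know that the lion devoured yesterday".split()
vocab = {w: i for i, w in enumerate(sorted(set(filler + no_filler)))}
model = TinyRNNLM(len(vocab))  # untrained here; real experiments use a trained LM

for name, sent in [("+filler", filler), ("-filler", no_filler)]:
    print(name, surprisal(model, vocab, sent))
```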

Authors (4)
  1. Ethan Wilcox (24 papers)
  2. Roger Levy (43 papers)
  3. Takashi Morita (12 papers)
  4. Richard Futrell (29 papers)
Citations (158)
