
Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models (1909.10579v1)

Published 23 Sep 2019 in cs.CL

Abstract: Neural language models (LMs) perform well on tasks that require sensitivity to syntactic structure. Drawing on the syntactic priming paradigm from psycholinguistics, we propose a novel technique to analyze the representations that enable such success. By establishing a gradient similarity metric between structures, this technique allows us to reconstruct the organization of the LMs' syntactic representational space. We use this technique to demonstrate that LSTM LMs' representations of different types of sentences with relative clauses are organized hierarchically in a linguistically interpretable manner, suggesting that the LMs track abstract properties of the sentence.
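The priming technique the abstract describes can be illustrated in miniature: adapt an LM on "prime" sentences of one structure, then measure how much surprisal drops on "target" sentences, with larger drops indicating more similar representations. The paper's actual experiments adapt LSTM LMs by fine-tuning; the sketch below is my own toy illustration (not the authors' code), substituting an additive-smoothed bigram model so the adaptation-effect computation is self-contained. The class and function names are hypothetical.

```python
import math
from collections import Counter

class BigramLM:
    """Toy additive-smoothed bigram LM, standing in for the paper's LSTM LMs."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha          # smoothing constant
        self.bigrams = Counter()    # counts of (prev, next) token pairs
        self.unigrams = Counter()   # counts of tokens in the prev position
        self.vocab = set()

    def train(self, sentences):
        for sent in sentences:
            toks = ["<s>"] + sent.split()
            self.vocab.update(toks)
            for a, b in zip(toks, toks[1:]):
                self.bigrams[(a, b)] += 1
                self.unigrams[a] += 1

    def surprisal(self, sentence):
        """Mean per-token surprisal (bits) under the smoothed bigram model."""
        toks = ["<s>"] + sentence.split()
        v = max(len(self.vocab), 1)
        total = 0.0
        for a, b in zip(toks, toks[1:]):
            p = (self.bigrams[(a, b)] + self.alpha) / (self.unigrams[a] + self.alpha * v)
            total += -math.log2(p)
        return total / (len(toks) - 1)

def adaptation_effect(base_corpus, primes, targets):
    """Priming-as-adaptation metric: mean target surprisal before adapting
    on the primes, minus mean target surprisal after. Larger values mean
    the primes' structure transferred more to the targets."""
    before = BigramLM()
    before.train(base_corpus)
    after = BigramLM()
    after.train(base_corpus + primes)
    pre = sum(before.surprisal(t) for t in targets) / len(targets)
    post = sum(after.surprisal(t) for t in targets) / len(targets)
    return pre - post
```

In the paper's setup, computing this effect for every pair of sentence-structure classes yields the gradient similarity metric over which the hierarchical organization of relative-clause types is reconstructed; here, primes that share structure with the targets produce a larger adaptation effect than unrelated primes.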

Authors (3)
  1. Grusha Prasad (7 papers)
  2. Marten van Schijndel (14 papers)
  3. Tal Linzen (73 papers)
Citations (47)
