
Overestimation of Syntactic Representation in Neural Language Models (2004.05067v1)

Published 10 Apr 2020 in cs.CL

Abstract: With the advent of powerful neural language models over the last few years, research attention has increasingly focused on what aspects of language they represent that make them so successful. Several testing methodologies have been developed to probe models' syntactic representations. One popular method for determining a model's ability to induce syntactic structure trains a model on strings generated according to a template, then tests the model's ability to distinguish such strings from superficially similar ones with different syntax. We illustrate a fundamental problem with this approach by reproducing positive results from a paper with two non-syntactic baseline language models: an n-gram model and an LSTM model trained on scrambled inputs.
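The baseline argument can be illustrated concretely. The sketch below is not the paper's code; it is a minimal toy example (with hypothetical template data) showing how a purely lexical bigram model, which represents no syntactic structure at all, can still assign higher probability to template-generated strings than to word-order foils, because the templates leak n-gram statistics:

```python
# Minimal sketch (not the paper's implementation): an add-one-smoothed
# bigram baseline that "passes" a template-vs-foil syntax test without
# representing any syntax. Toy data below is hypothetical.
import math
from collections import Counter

def train_bigram(corpus):
    """Return a log-probability scorer from add-one-smoothed bigram counts."""
    bigrams, unigrams, vocab = Counter(), Counter(), set()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        vocab.update(tokens)
        for a, b in zip(tokens, tokens[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    V = len(vocab)
    def logprob(sent):
        tokens = ["<s>"] + sent + ["</s>"]
        return sum(
            math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
            for a, b in zip(tokens, tokens[1:])
        )
    return logprob

# Hypothetical template "the N V"; the foil uses the same words reordered.
train = [["the", "dog", "barks"], ["the", "cat", "sleeps"], ["the", "bird", "sings"]]
score = train_bigram(train)

in_template = ["the", "dog", "sleeps"]   # matches the training template
foil        = ["sleeps", "the", "dog"]   # same words, different order

print(score(in_template) > score(foil))  # True: the non-syntactic baseline succeeds
```

Because local co-occurrence statistics alone separate the two strings, success on such a test does not by itself demonstrate that a model has induced syntactic structure.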

Citations (12)
