
A Targeted Assessment of Incremental Processing in Neural Language Models and Humans (2106.03232v2)

Published 6 Jun 2021 in cs.CL

Abstract: We present a targeted, scaled-up comparison of incremental processing in humans and neural language models by collecting by-word reaction time data for sixteen different syntactic test suites across a range of structural phenomena. Human reaction time data comes from a novel online experimental paradigm called the Interpolated Maze task. We compare human reaction times to by-word probabilities for four contemporary language models with different architectures, trained on a range of dataset sizes. We find that across many phenomena, both humans and language models show increased processing difficulty in ungrammatical sentence regions, with human and model "accuracy" scores (à la Marvin and Linzen (2018)) about equal. However, although model outputs match humans in direction, we show that models systematically under-predict the magnitude of the difference in incremental processing difficulty between grammatical and ungrammatical sentences. Specifically, when models encounter syntactic violations, they fail to accurately predict the longer reaction times observed in the human data. These results call into question whether contemporary language models are approaching human-like performance for sensitivity to syntactic violations.
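The comparison the abstract describes rests on per-word surprisal, the negative log probability a model assigns to each word, which is the standard linking function between model probabilities and human reading times. A minimal sketch of the "accuracy" scoring idea (with purely illustrative, made-up probabilities, not numbers from the paper):

```python
import math

def surprisal(p):
    """Surprisal in bits: -log2 of the conditional word probability."""
    return -math.log2(p)

# Hypothetical by-word probabilities in the critical region of a
# grammatical vs. an ungrammatical sentence (illustrative values only,
# not drawn from the paper's data).
grammatical_region = [0.20, 0.15]
ungrammatical_region = [0.02, 0.01]

s_gram = sum(surprisal(p) for p in grammatical_region)
s_ungram = sum(surprisal(p) for p in ungrammatical_region)

# Marvin-and-Linzen-style "accuracy": the model is counted as correct
# when it assigns higher total surprisal (lower probability) to the
# region containing the syntactic violation.
model_correct = s_ungram > s_gram
print(model_correct)  # → True for these illustrative values
```

The paper's finding is that even when this binary comparison comes out correct (direction matches humans), the *size* of the surprisal gap under-predicts the size of the human reaction-time slowdown at the violation.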

Authors (3)
  1. Ethan Gotlieb Wilcox (9 papers)
  2. Pranali Vani (1 paper)
  3. Roger P. Levy (12 papers)
Citations (31)
