Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude? (2107.09648v1)

Published 20 Jul 2021 in cs.CL, cs.AI, and cs.LG

Abstract: Despite being designed for performance rather than cognitive plausibility, transformer language models have been found to be better at predicting metrics used to assess human language comprehension than language models with other architectures, such as recurrent neural networks. Based on how well they predict the N400, a neural signal associated with processing difficulty, we propose and provide evidence for one possible explanation: their predictions are affected by the preceding context in a way analogous to the effect of semantic facilitation in humans.
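Work in this area typically links a language model to the N400 through surprisal: the negative log-probability the model assigns to a word given its preceding context, with higher surprisal expected to pattern with larger N400 amplitudes. The sketch below shows one common way to compute word surprisal from a causal transformer, using GPT-2 via Hugging Face `transformers`. The choice of GPT-2, the `surprisal` helper, and the example sentences are illustrative assumptions, not the authors' exact models or stimuli.

```python
# Minimal sketch: word surprisal from a causal transformer LM.
# Assumption: GPT-2 stands in for the transformer LMs compared in the paper.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal(context: str, target: str) -> float:
    """Return -log2 P(target | context), summed over the target's subword tokens."""
    ctx_ids = tokenizer.encode(context)
    tgt_ids = tokenizer.encode(" " + target)  # leading space: GPT-2's BPE marks word starts
    input_ids = torch.tensor([ctx_ids + tgt_ids])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    bits = 0.0
    for i, tok in enumerate(tgt_ids):
        # Logits at position p predict the token at position p + 1,
        # so the target's i-th subtoken is scored at position len(ctx_ids) + i - 1.
        bits -= log_probs[0, len(ctx_ids) + i - 1, tok].item() / math.log(2)
    return bits

# A contextually supported word should get lower surprisal than an anomalous one,
# mirroring the smaller N400 elicited by expected continuations.
print(surprisal("The day was breezy so the boy went outside to fly a", "kite"))
print(surprisal("The day was breezy so the boy went outside to fly a", "brick"))
```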

Authors (4)
  1. James A. Michaelov (13 papers)
  2. Megan D. Bardolph (1 paper)
  3. Seana Coulson (3 papers)
  4. Benjamin K. Bergen (31 papers)
Citations (22)