Individual corpora predict fast memory retrieval during reading (2010.10176v1)

Published 20 Oct 2020 in cs.CL and cs.IR

Abstract: The corpus from which a predictive language model is trained can be considered the experience of a semantic system. We recorded the everyday reading of two participants for two months on a tablet, generating individual corpus samples of 300/500K tokens. We then trained word2vec models on the individual corpora and on a 70 million-sentence newspaper corpus to obtain individual and norm-based long-term memory structure. To test whether individual corpora make better predictions for a cognitive task of long-term memory retrieval, we generated stimulus materials consisting of 134 sentences with uncorrelated individual and norm-based word probabilities. In the subsequent eye-tracking study, conducted 1-2 months later, regression analyses revealed that individual, but not norm-corpus-based, word probabilities account for first-fixation duration and first-pass gaze duration. Word length additionally affected gaze duration and total viewing duration. The results suggest that corpora representative of an individual's long-term memory structure can explain reading performance better than a norm corpus, and that recently acquired information is lexically accessed rapidly.
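The pipeline sketched in the abstract (train word2vec on an individual and a norm corpus, derive corpus-based word probabilities, and use them as regression predictors of fixation durations) can be illustrated with a minimal sketch. This is not the authors' code: the toy corpora, hyperparameters, and the similarity-based probability proxy below are illustrative assumptions, and the real analyses used the full 300/500K-token individual corpora, the 70 million-sentence newspaper corpus, and 134 stimulus sentences.

```python
# Minimal sketch of the corpus-based prediction pipeline (illustrative only).
import numpy as np
from gensim.models import Word2Vec

# Toy stand-ins for the individual (~300-500K tokens) and norm (70M-sentence)
# corpora; in practice these would be read from large text files.
individual_corpus = [
    ["the", "doctor", "examined", "the", "patient"],
    ["the", "patient", "thanked", "the", "doctor"],
    ["she", "read", "the", "tablet", "every", "evening"],
]
norm_corpus = [
    ["the", "newspaper", "reported", "the", "election"],
    ["the", "doctor", "treated", "the", "patient"],
    ["voters", "read", "the", "newspaper", "daily"],
]

def train(corpus):
    # Skip-gram word2vec; min_count=1 only because the toy corpora are tiny.
    return Word2Vec(sentences=corpus, vector_size=50, window=5,
                    min_count=1, sg=1, epochs=50, seed=1)

individual_model = train(individual_corpus)
norm_model = train(norm_corpus)

def context_probability(model, context_tokens, target):
    """Cosine similarity of the target word to the mean vector of its
    sentence context, used here as a stand-in for a corpus-based word
    probability (an assumption, not the paper's exact measure)."""
    kv = model.wv
    ctx_vecs = [kv[w] for w in context_tokens if w in kv]
    if target not in kv or not ctx_vecs:
        return np.nan
    ctx = np.mean(ctx_vecs, axis=0)
    tgt = kv[target]
    return float(np.dot(ctx, tgt) / (np.linalg.norm(ctx) * np.linalg.norm(tgt)))

# For each stimulus sentence, both predictors (plus word length) would enter
# regression analyses of first-fixation and gaze durations.
context, target = ["the", "doctor", "examined", "the"], "patient"
print("individual-based predictor:", context_probability(individual_model, context, target))
print("norm-based predictor:      ", context_probability(norm_model, context, target))
print("word length:               ", len(target))
```

A design note on the sketch: keeping the individual and norm models architecturally identical and varying only the training corpus mirrors the paper's logic, in which differences in the predictors reflect differences in experience rather than in the model.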

Authors (5)
  1. Markus J. Hofmann (4 papers)
  2. Lara Müller (1 paper)
  3. Andre Rölke (1 paper)
  4. Ralph Radach (3 papers)
  5. Chris Biemann (78 papers)
Citations (4)
