
Temperature-scaling surprisal estimates improve fit to human reading times -- but does it do so for the "right reasons"? (2311.09325v2)

Published 15 Nov 2023 in cs.CL and cs.AI

Abstract: A wide body of evidence shows that human language processing difficulty is predicted by the information-theoretic measure surprisal, a word's negative log probability in context. However, it is still unclear how to best estimate these probabilities needed for predicting human processing difficulty -- while a long-standing belief held that models with lower perplexity would provide more accurate estimates of word predictability, and therefore lead to better reading time predictions, recent work has shown that for very large models, psycholinguistic predictive power decreases. One reason could be that LLMs might be more confident of their predictions than humans, because they have had exposure to several magnitudes more data. In this paper, we test what effect temperature-scaling of LLM predictions has on surprisal estimates and their predictive power of reading times of English texts. Firstly, we show that calibration of LLMs typically improves with model size, i.e. poorer calibration cannot account for poorer fit to reading times. Secondly, we find that temperature-scaling probabilities lead to a systematically better fit to reading times (up to 89% improvement in delta log likelihood), across several reading time corpora. Finally, we show that this improvement in fit is chiefly driven by words that are composed of multiple subword tokens.
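To make the central quantity concrete: surprisal is a word's negative log probability in context, and temperature scaling divides the model's logits by a constant T before normalizing, flattening (T > 1) or sharpening (T < 1) the predicted distribution. The sketch below illustrates this idea with a Hugging Face causal LM (GPT-2 here) and sums subword-token surprisals to get word-level values, which is relevant to the paper's finding about multi-token words. It is a minimal illustration only; the specific model, temperature value, and word-to-token alignment are assumptions, not the authors' exact setup.

```python
# Sketch: temperature-scaled, word-level surprisal (in bits) for a sentence.
# Assumes the Hugging Face `transformers` library and GPT-2; the paper's
# models, corpora, and fitted temperatures are not reproduced here.
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def word_surprisals(sentence: str, temperature: float = 1.0):
    """Return (word, surprisal) pairs, summing surprisal over subword tokens."""
    words = sentence.split()
    # Encode word by word so subword tokens can be mapped back to words.
    ids_per_word = [
        tokenizer.encode((" " if i > 0 else "") + w, add_special_tokens=False)
        for i, w in enumerate(words)
    ]
    flat_ids = [t for ids in ids_per_word for t in ids]
    input_ids = torch.tensor([[tokenizer.bos_token_id] + flat_ids])

    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab)
    # Temperature-scale the logits before normalizing into probabilities.
    log_probs = torch.log_softmax(logits / temperature, dim=-1)

    surprisals, pos = [], 0
    for ids in ids_per_word:
        s = 0.0
        for tok in ids:
            # Token at flat position `pos` is predicted by logits at that index
            # (the BOS token shifts the sequence by one). Convert nats to bits.
            s += -log_probs[0, pos, tok].item() / math.log(2)
            pos += 1
        surprisals.append(s)
    return list(zip(words, surprisals))

# Example: compare surprisals at the default T = 1.0 vs. a flattened distribution.
print(word_surprisals("The old man the boats", temperature=1.0))
print(word_surprisals("The old man the boats", temperature=2.5))
```

In a reading-time analysis, such per-word surprisals (at various temperatures) would then be entered as predictors in a regression against human reading times; the abstract reports the fit improvement in terms of delta log likelihood.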

Authors (3)
  1. Tong Liu (316 papers)
  2. Iza Škrjanec (3 papers)
  3. Vera Demberg (48 papers)
Citations (4)
