Human Attention during Goal-directed Reading Comprehension Relies on Task Optimization (2107.05799v2)

Published 13 Jul 2021 in cs.CL and cs.AI

Abstract: The computational principles underlying attention allocation in complex goal-directed tasks remain elusive. Goal-directed reading, i.e., reading a passage to answer a question in mind, is a common real-world task that strongly engages attention. Here, we investigate what computational models can explain attention distribution in this complex task. We show that the reading time on each word is predicted by the attention weights in transformer-based deep neural networks (DNNs) optimized to perform the same reading task. Eye-tracking further reveals that readers separately attend to basic text features and question-relevant information during first-pass reading and rereading, respectively. Similarly, text features and question relevance separately modulate attention weights in shallow and deep DNN layers. Furthermore, when readers scan a passage without a question in mind, their reading time is predicted by DNNs optimized for a word prediction task. Therefore, attention during real-world reading can be interpreted as the consequence of task optimization.
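
To make the core analysis concrete, here is a minimal sketch (not the authors' code) of the kind of pipeline the abstract describes: extract per-word attention weights from a pretrained transformer and correlate them, layer by layer, with word-level reading times. The model name (`bert-base-uncased`), the synthetic reading times, and the aggregation choices (mean over heads, attention received summed over query positions, first subword per word) are all illustrative assumptions, not details from the paper.

```python
# Sketch: correlate transformer attention weights with per-word reading
# times, layer by layer. All modeling choices here are assumptions.
import numpy as np
import torch
from scipy.stats import pearsonr
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

passage = "The quick brown fox jumps over the lazy dog"
words = passage.split()

enc = tokenizer(passage, return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

# out.attentions is a tuple with one (batch, heads, seq, seq) tensor per
# layer. For each layer, average over heads, then sum the attention each
# token *receives* across all query positions as a per-token salience score.
layer_scores = []
for att in out.attentions:
    att = att[0].mean(dim=0)   # (seq, seq), averaged over heads
    received = att.sum(dim=0)  # attention received by each key token
    layer_scores.append(received)

# Map subword tokens back to words: take the first subword of each word
# (special tokens like [CLS]/[SEP] have word_id None and are skipped).
word_ids = enc.word_ids(0)
first_subword = {}
for idx, wid in enumerate(word_ids):
    if wid is not None and wid not in first_subword:
        first_subword[wid] = idx

# Hypothetical per-word reading times in ms; in the actual study these
# would come from eye tracking.
reading_times = np.random.default_rng(0).uniform(150, 400, size=len(words))

for layer, scores in enumerate(layer_scores):
    word_scores = np.array(
        [scores[first_subword[w]].item() for w in range(len(words))]
    )
    r, p = pearsonr(word_scores, reading_times)
    print(f"layer {layer:2d}: r = {r:+.2f} (p = {p:.2f})")
```

In the paper's framing, one would expect shallow-layer scores to track basic text features and deep-layer scores to track question relevance; this sketch only shows the mechanics of the layer-wise correlation.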

Authors (5)
  1. Jiajie Zou (5 papers)
  2. Yuran Zhang (7 papers)
  3. Jialu Li (53 papers)
  4. Xing Tian (5 papers)
  5. Nai Ding (15 papers)
Citations (1)