
Causal Graph in Language Model Rediscovers Cortical Hierarchy in Human Narrative Processing (2311.10431v1)

Published 17 Nov 2023 in cs.CL

Abstract: Understanding how humans process natural language has long been a vital research direction. The field of NLP has recently experienced a surge in the development of powerful LLMs. These models have proven to be invaluable tools for studying another complex system known to process human language: the brain. Previous studies have demonstrated that the features of LLMs can be mapped to fMRI brain activity. This raises the question: is there a commonality between information processing in LLMs and the human brain? To estimate information flow patterns in an LLM, we examined the causal relationships between different layers. Drawing inspiration from the workspace framework for consciousness, we hypothesized that features integrating more information would more accurately predict higher hierarchical brain activity. To validate this hypothesis, we classified LLM features into two categories based on causal network measures: 'low in-degree' and 'high in-degree'. We subsequently compared the brain prediction accuracy maps for these two groups. Our results reveal that the difference in prediction accuracy follows a hierarchical pattern, consistent with the cortical hierarchy map revealed by activity time constants. This finding suggests a parallel between how LLMs and the human brain process linguistic information.
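The grouping step described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' actual pipeline: the example causal graph, the median split, and the function name are all assumptions. It shows the general idea of partitioning a causal graph over LLM layers into 'low in-degree' and 'high in-degree' nodes, where high in-degree nodes are those receiving causal influence from many other layers (i.e., integrating more information).

```python
# Illustrative sketch (assumed, not from the paper): partition layers of a
# causal graph by in-degree, a proxy for how much information each layer
# integrates from the others.
import networkx as nx

def split_by_in_degree(causal_graph: nx.DiGraph):
    """Partition nodes into low / high groups by median in-degree."""
    degrees = dict(causal_graph.in_degree())
    median = sorted(degrees.values())[len(degrees) // 2]
    low = [n for n, d in degrees.items() if d < median]
    high = [n for n, d in degrees.items() if d >= median]
    return low, high

# Toy causal graph over 4 layers: later layers receive more incoming edges,
# mimicking features that integrate progressively more information.
g = nx.DiGraph([(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)])
low, high = split_by_in_degree(g)
print(low, high)  # → [0, 1] [2, 3]
```

In the paper's setting, each group's features would then be used to predict fMRI activity, and the two resulting accuracy maps compared voxel-wise.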

Authors (2)
  1. Zhengqi He (3 papers)
  2. Taro Toyoizumi (23 papers)
