Analyzing Narrative Processing in Large Language Models (LLMs): Using GPT4 to test BERT (2405.02024v1)

Published 3 May 2024 in cs.CL and cs.AI

Abstract: The ability to transmit and receive complex information via language is unique to humans and is the basis of traditions, culture and versatile social interactions. With the disruptive introduction of transformer-based LLMs, humans are no longer the only entities that "understand" and produce language. In the present study, we have taken first steps toward using LLMs as a model for understanding fundamental mechanisms of language processing in neural networks, in order to make predictions and generate hypotheses about how the human brain processes language. To this end, we used ChatGPT to generate seven different stylistic variations of ten different narratives (Aesop's fables). We used these stories as input for the open-source LLM BERT and analyzed the activation patterns of BERT's hidden units using multi-dimensional scaling and cluster analysis. We found that the activation vectors of the hidden units cluster according to stylistic variation in earlier layers of BERT (layer 1) than they do according to narrative content (layers 4-5). Although BERT consists of 12 identical building blocks that are stacked and trained on large text corpora, the different layers perform different tasks. This makes BERT a useful model of the human brain, where self-similar structures, i.e. different areas of the cerebral cortex, can serve different functions and are thus well suited to processing language efficiently. The proposed approach has the potential to open the black box of LLMs on the one hand, and might be a further step toward unraveling the neural processes underlying human language processing and cognition in general.

The paper "Analyzing Narrative Processing in LLMs: Using GPT4 to test BERT" explores the intricate mechanisms of language processing within LLMs, particularly focusing on the distinct roles played by different layers in BERT. This paper aims to leverage the capabilities of LLMs, such as ChatGPT and BERT, to draw parallels and hypotheses about human brain functions related to language processing.

Objectives and Methodology

The central objective of the paper was twofold:

  1. To use LLMs as a model to elucidate fundamental mechanisms of language processing in artificial neural networks.
  2. To make informed predictions and generate hypotheses about how similar processes might occur in the human brain.

To achieve these goals, the authors used ChatGPT to create seven stylistically varied versions of each of ten narratives selected from Aesop's fables. These generated stories were then fed into BERT, an open-source LLM, and the activation patterns across its hidden units were analyzed. Specifically, the activation vectors of BERT's hidden units were examined using multi-dimensional scaling and cluster analysis.
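A minimal sketch of such a pipeline is given below, assuming the Hugging Face transformers library, scikit-learn, and bert-base-uncased; the mean pooling over tokens and the cluster count are illustrative assumptions, not necessarily the authors' exact choices.

    import numpy as np
    import torch
    from transformers import BertModel, BertTokenizer
    from sklearn.cluster import KMeans
    from sklearn.manifold import MDS

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
    model.eval()

    def layer_embeddings(texts, layer):
        """Mean-pool the hidden states of one BERT layer (1-12) for each text."""
        vectors = []
        for text in texts:
            inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
            with torch.no_grad():
                outputs = model(**inputs)
            # hidden_states[0] is the embedding layer; transformer layers are 1..12
            hidden = outputs.hidden_states[layer].squeeze(0)   # (tokens, 768)
            vectors.append(hidden.mean(dim=0).numpy())         # average over tokens -> one 768-d vector
        return np.vstack(vectors)

    # In the study, texts would hold the 70 generated stories (10 fables x 7 styles);
    # two placeholder strings are used here so the sketch runs as-is.
    texts = ["The fox flattered the crow until it dropped the cheese.",
             "Yo, check it: the crow had cheese, and the fox had game."]
    emb = layer_embeddings(texts, layer=1)

    coords = MDS(n_components=2, random_state=0).fit_transform(emb)        # 2-D projection for inspection
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)  # 7 clusters (one per style) in the study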

Key Findings

The paper yielded several significant findings:

  1. Layer-wise Functional Differentiation: The results indicated that the activation vectors in BERT's layers clustered according to stylistic variation in the earliest layer (Layer 1), while clustering by narrative content emerged in the middle layers (Layers 4-5). This suggests that different layers in BERT specialize in different aspects of language processing: earlier layers handle style, and intermediate layers capture content (one way this per-layer comparison could be quantified is sketched after this list).
  2. Functional Analogies to the Human Brain: Despite BERT being composed of 12 identical building blocks, each layer demonstrated a specialization for distinct tasks after training on large text corpora. This observation is analogous to the human cerebral cortex, where different regions, though structurally similar, perform varied functions. This specialization within BERT’s architecture might offer insights into the efficiency of human language processing mechanisms.
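Continuing the sketch above, one illustrative way to quantify this layer-wise differentiation is to compare, per layer, how well unsupervised clusters agree with the known style labels versus the known fable labels, for example via the adjusted Rand index. The metric and the label ordering below are assumptions for illustration, not necessarily the paper's exact analysis; the snippet also assumes texts contains the full set of 70 stories.

    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    # Assumed ordering: texts grouped as 10 fables x 7 styles (placeholder labels)
    n_fables, n_styles = 10, 7
    fable_labels = [f for f in range(n_fables) for _ in range(n_styles)]
    style_labels = [s for _ in range(n_fables) for s in range(n_styles)]

    for layer in range(1, 13):                       # BERT-base has 12 transformer layers
        emb = layer_embeddings(texts, layer)         # helper defined in the sketch above
        style_ari = adjusted_rand_score(
            style_labels,
            KMeans(n_clusters=n_styles, n_init=10, random_state=0).fit_predict(emb))
        fable_ari = adjusted_rand_score(
            fable_labels,
            KMeans(n_clusters=n_fables, n_init=10, random_state=0).fit_predict(emb))
        print(f"layer {layer:2d}: style ARI={style_ari:.2f}, fable ARI={fable_ari:.2f}")

Under the paper's reading of the results, agreement with style labels should peak in the earliest layer, while agreement with fable (content) labels should peak around layers 4-5.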

Implications and Future Directions

The paper argues that the layer-wise specialization observed in LLMs like BERT can serve as a model for understanding how the human brain processes language. The methodology employed here, probing BERT's layers with stylistic variations of the same narrative content, provides a novel lens for investigating the "black box" of LLMs.

The findings could serve as a foundational step toward a deeper understanding of neural language processing both in artificial systems and the human brain. The approach used in this paper holds promise for future research aimed at unraveling the neural processes underlying human language cognition.

Authors (6)
  1. Patrick Krauss (40 papers)
  2. Jannik Hösch (1 paper)
  3. Claus Metzner (29 papers)
  4. Andreas Maier (394 papers)
  5. Peter Uhrig (5 papers)
  6. Achim Schilling (34 papers)
Citations (1)