Divergences between Language Models and Human Brains (2311.09308v2)

Published 15 Nov 2023 in cs.CL, cs.AI, cs.LG, and q-bio.NC

Abstract: Do machines and humans process language in similar ways? Recent research has hinted in the affirmative, finding that brain signals can be effectively predicted using the internal representations of large language models (LMs). Although such results are thought to reflect shared computational principles between LMs and human brains, there are also clear differences in how LMs and humans represent and use language. In this work, we systematically explore the divergences between human and machine language processing by examining the differences between LM representations and human brain responses to language as measured by Magnetoencephalography (MEG) across two datasets in which subjects read and listened to narrative stories. Using a data-driven approach, we identify two domains that are not captured well by LMs: social/emotional intelligence and physical commonsense. We then validate these domains with human behavioral experiments and show that fine-tuning LMs on these domains can improve their alignment with human brain responses.
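The abstract describes predicting brain responses from LM representations. A minimal sketch of such a brain-encoding analysis is shown below; the choice of ridge regression, the feature and sensor dimensions, and the random placeholder data are all illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch of an encoding analysis: map LM hidden states to MEG responses with a
# regularized linear model and score alignment on held-out data.
# Shapes, ridge regression, and synthetic data are assumptions for illustration.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: one row per word/time point in the narrative stimulus.
n_words, lm_dim, n_sensors = 1000, 768, 306   # 306 ~ a typical MEG sensor count
lm_features = rng.standard_normal((n_words, lm_dim))        # LM hidden states
meg_responses = rng.standard_normal((n_words, n_sensors))   # aligned MEG signal

X_train, X_test, y_train, y_test = train_test_split(
    lm_features, meg_responses, test_size=0.2, random_state=0
)

# One regularized linear map from LM features to all MEG sensors at once.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_train, y_train)

# Alignment per sensor: correlation between predicted and held-out responses.
pred = encoder.predict(X_test)
per_sensor_r = np.array([
    np.corrcoef(pred[:, s], y_test[:, s])[0, 1] for s in range(n_sensors)
])
print("mean held-out correlation:", per_sensor_r.mean())
```

Divergences between LMs and brains can then be probed by looking at where such a model predicts poorly, which is the kind of data-driven analysis the abstract alludes to.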

Authors (5)
  1. Yuchen Zhou (38 papers)
  2. Emmy Liu (17 papers)
  3. Graham Neubig (342 papers)
  4. Michael J. Tarr (20 papers)
  5. Leila Wehbe (15 papers)
