Conceptual structure coheres in human cognition but not in large language models (2304.02754v2)

Published 5 Apr 2023 in cs.AI, cs.CL, and cs.LG

Abstract: Neural network models of language have long been used as a tool for developing hypotheses about conceptual representation in the mind and brain. For many years, such use involved extracting vector-space representations of words and using distances among these to predict or understand human behavior in various semantic tasks. Contemporary LLMs, however, make it possible to interrogate the latent structure of conceptual representations using experimental methods nearly identical to those commonly used with human participants. The current work utilizes three common techniques borrowed from cognitive psychology to estimate and compare the structure of concepts in humans and a suite of LLMs. In humans, we show that conceptual structure is robust to differences in culture, language, and method of estimation. Structures estimated from LLM behavior, while individually fairly consistent with those estimated from human behavior, vary much more depending upon the particular task used to generate responses--across tasks, estimates of conceptual structure from the very same model cohere less with one another than do human structure estimates. These results highlight an important difference between contemporary LLMs and human cognition, with implications for understanding some fundamental limitations of contemporary machine language.
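The classic approach the abstract describes, comparing distances among word-embedding vectors to human behavior in semantic tasks, can be illustrated with a minimal sketch. The embeddings and human similarity ratings below are hypothetical stand-ins for illustration only, not data or code from the paper.

```python
# A minimal sketch of the embedding-distance approach: compute pairwise
# cosine similarities among word vectors and correlate them with human
# similarity judgments for the same word pairs. All values are hypothetical.
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr

# Hypothetical word embeddings (word -> vector); in practice these would be
# extracted from a language model.
embeddings = {
    "dog":   np.array([0.9, 0.1, 0.3]),
    "wolf":  np.array([0.8, 0.2, 0.4]),
    "cat":   np.array([0.7, 0.3, 0.2]),
    "stone": np.array([0.1, 0.9, 0.8]),
}

def cosine_similarity(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Model-derived similarity for every word pair.
pairs = list(combinations(embeddings, 2))
model_sims = [cosine_similarity(embeddings[a], embeddings[b]) for a, b in pairs]

# Hypothetical human similarity ratings (e.g., 1-7 scale), one per pair,
# in the same order as `pairs`.
human_sims = [6.2, 4.5, 1.3, 4.0, 1.5, 1.2]

# Rank correlation between model-derived and human-derived structure.
rho, p = spearmanr(model_sims, human_sims)
print(f"Spearman rho between model and human similarities: {rho:.2f} (p={p:.3f})")
```

The paper's central comparison goes beyond this single-method setup: it estimates conceptual structure from multiple behavioral tasks and asks whether those estimates cohere with one another, finding that they do for humans but much less so for LLMs.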

Authors (6)
  1. Siddharth Suresh (11 papers)
  2. Kushin Mukherjee (9 papers)
  3. Xizheng Yu (4 papers)
  4. Wei-Chun Huang (3 papers)
  5. Lisa Padua (1 paper)
  6. Timothy T Rogers (11 papers)
Citations (6)