
Human-like object concept representations emerge naturally in multimodal large language models (2407.01067v1)

Published 1 Jul 2024 in cs.AI, cs.CL, cs.CV, cs.HC, and cs.LG

Abstract: The conceptualization and categorization of natural objects in the human mind have long intrigued cognitive scientists and neuroscientists, offering crucial insights into human perception and cognition. Recently, the rapid development of LLMs has raised the attractive question of whether these models can also develop human-like object representations through exposure to vast amounts of linguistic and multimodal data. In this study, we combined behavioral and neuroimaging analysis methods to uncover how the object concept representations in LLMs correlate with those of humans. By collecting large-scale datasets of 4.7 million triplet judgments from an LLM and a Multimodal LLM (MLLM), we were able to derive low-dimensional embeddings that capture the underlying similarity structure of 1,854 natural objects. The resulting 66-dimensional embeddings were found to be highly stable and predictive, and exhibited semantic clustering akin to human mental representations. Interestingly, the interpretability of the dimensions underlying these embeddings suggests that LLMs and MLLMs have developed human-like conceptual representations of natural objects. Further analysis demonstrated strong alignment between the identified model embeddings and neural activity patterns in many functionally defined brain ROIs (e.g., EBA, PPA, RSC and FFA). This provides compelling evidence that the object representations in LLMs, while not identical to those of humans, share fundamental commonalities that reflect key schemas of human conceptual knowledge. This study advances our understanding of machine intelligence and informs the development of more human-like artificial cognitive systems.
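
The abstract describes two analyses: deriving low-dimensional object embeddings from triplet odd-one-out judgments, and aligning those embeddings with brain ROI activity. The sketch below is not the authors' code; it assumes a SPoSE-style formulation in which non-negative embeddings predict the odd one out via a softmax over pairwise dot-product similarities, and representational similarity analysis (RSA) for model-brain alignment. All names, the random stand-in data, and the exact choice rule are illustrative assumptions.

```python
# Minimal sketch of (1) triplet odd-one-out prediction from low-dimensional
# object embeddings and (2) RSA-style alignment with ROI responses.
# The 1,854 objects and 66 dimensions match the counts reported in the abstract;
# everything else (random data, softmax rule, metrics) is an assumption.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects, n_dims = 1854, 66
embeddings = rng.random((n_objects, n_dims))  # stand-in for learned, non-negative embeddings


def odd_one_out_probs(i: int, j: int, k: int, X: np.ndarray) -> np.ndarray:
    """Choice probabilities that i, j, or k is the odd one out of the triplet.

    The most similar pair is kept, so the remaining item is the odd one out;
    a softmax over the three pairwise similarities yields probabilities
    (the SPoSE-style rule assumed here).
    """
    s_ij = X[i] @ X[j]  # high s_ij -> k is the odd one out
    s_ik = X[i] @ X[k]  # high s_ik -> j is the odd one out
    s_jk = X[j] @ X[k]  # high s_jk -> i is the odd one out
    logits = np.array([s_jk, s_ik, s_ij])  # odd-one-out = i, j, k respectively
    p = np.exp(logits - logits.max())
    return p / p.sum()


def rsa_correlation(model_emb: np.ndarray, roi_responses: np.ndarray) -> float:
    """Spearman correlation between model and ROI representational
    dissimilarity matrices, a standard measure of model-brain alignment."""
    model_rdm = pdist(model_emb, metric="correlation")
    brain_rdm = pdist(roi_responses, metric="correlation")
    rho, _ = spearmanr(model_rdm, brain_rdm)
    return rho


# Example: choice probabilities for one triplet, and alignment with a
# hypothetical ROI response matrix over the same objects (here random).
print(odd_one_out_probs(3, 10, 42, embeddings))
roi = rng.standard_normal((n_objects, 200))  # placeholder voxel responses
print(rsa_correlation(embeddings, roi))
```

In the paper's setting, the embedding dimensions are learned by fitting such a choice model to the 4.7 million model-generated triplet judgments; the random arrays above only stand in for those learned quantities.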

Authors (13)
  1. Changde Du (25 papers)
  2. Kaicheng Fu (5 papers)
  3. Bincheng Wen (1 paper)
  4. Yi Sun (146 papers)
  5. Jie Peng (100 papers)
  6. Wei Wei (424 papers)
  7. Ying Gao (49 papers)
  8. Shengpei Wang (3 papers)
  9. Chuncheng Zhang (6 papers)
  10. Jinpeng Li (67 papers)
  11. Shuang Qiu (46 papers)
  12. Le Chang (13 papers)
  13. Huiguang He (26 papers)