
Does Conceptual Representation Require Embodiment? Insights From Large Language Models (2305.19103v3)

Published 30 May 2023 in cs.CL

Abstract: To what extent can language alone give rise to complex concepts, or is embodied experience essential? Recent advancements in LLMs offer fresh perspectives on this question. Although LLMs are trained on restricted modalities, they exhibit human-like performance in diverse psychological tasks. Our study compared representations of 4,442 lexical concepts between humans and ChatGPTs (GPT-3.5 and GPT-4) across multiple dimensions, including five key domains: emotion, salience, mental visualization, sensory, and motor experience. We identify two main findings: 1) Both models strongly align with human representations in non-sensorimotor domains but lag in sensory and motor areas, with GPT-4 outperforming GPT-3.5; 2) GPT-4's gains are associated with its additional visual learning, which also appears to benefit related dimensions such as haptics and imageability. These results highlight the limitations of language in isolation and suggest that integrating diverse input modalities leads to a more human-like conceptual representation.
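The abstract describes comparing human and model ratings of lexical concepts dimension by dimension, but does not spell out the comparison procedure. The sketch below is a minimal illustration of one plausible approach, correlating per-word ratings within each dimension with a Spearman correlation; the word lists, rating values, and dimension names are hypothetical placeholders, not data or methods from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-word ratings (e.g., on a 0-5 scale) from human norms and a model,
# grouped by dimension. Real studies of this kind use thousands of words per dimension.
human_ratings = {
    "imageability": {"apple": 4.8, "justice": 1.9, "thunder": 4.1},
    "haptics":      {"apple": 4.2, "justice": 0.3, "thunder": 1.0},
}
model_ratings = {
    "imageability": {"apple": 4.5, "justice": 2.4, "thunder": 3.8},
    "haptics":      {"apple": 3.1, "justice": 1.2, "thunder": 2.0},
}

def alignment_by_dimension(human, model):
    """Spearman correlation between human and model ratings, computed per dimension."""
    results = {}
    for dim, h in human.items():
        words = sorted(set(h) & set(model.get(dim, {})))  # words rated by both sources
        h_vec = np.array([h[w] for w in words])
        m_vec = np.array([model[dim][w] for w in words])
        rho, p = spearmanr(h_vec, m_vec)
        results[dim] = (rho, p)
    return results

for dim, (rho, p) in alignment_by_dimension(human_ratings, model_ratings).items():
    print(f"{dim}: rho={rho:.2f} (p={p:.3f})")
```

Higher correlations in non-sensorimotor dimensions than in sensory or motor ones would correspond to the pattern the abstract reports.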

Authors (6)
  1. Qihui Xu (4 papers)
  2. Yingying Peng (59 papers)
  3. Samuel A. Nastase (3 papers)
  4. Martin Chodorow (1 paper)
  5. Minghua Wu (4 papers)
  6. Ping Li (421 papers)
Citations (7)