
Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models (2305.14763v1)

Published 24 May 2023 in cs.CL

Abstract: The escalating debate on AI's capabilities warrants developing reliable metrics to assess machine "intelligence". Recently, many anecdotal examples were used to suggest that newer LLMs like ChatGPT and GPT-4 exhibit Neural Theory-of-Mind (N-ToM); however, prior work reached conflicting conclusions regarding those abilities. We investigate the extent of LLMs' N-ToM through an extensive evaluation on 6 tasks and find that while LLMs exhibit certain N-ToM abilities, this behavior is far from being robust. We further examine the factors impacting performance on N-ToM tasks and discover that LLMs struggle with adversarial examples, indicating reliance on shallow heuristics rather than robust ToM abilities. We caution against drawing conclusions from anecdotal examples, limited benchmark testing, and using human-designed psychological tests to evaluate models.

Authors (8)
  1. Natalie Shapira (2 papers)
  2. Mosh Levy (6 papers)
  3. Seyed Hossein Alavi (6 papers)
  4. Xuhui Zhou (33 papers)
  5. Yejin Choi (287 papers)
  6. Yoav Goldberg (142 papers)
  7. Maarten Sap (86 papers)
  8. Vered Shwartz (49 papers)
Citations (89)
