Mind Scramble: Unveiling Large Language Model Psychology Via Typoglycemia (2410.01677v3)

Published 2 Oct 2024 in cs.AI

Abstract: Research into the external behaviors and internal mechanisms of LLMs has shown promise in addressing complex tasks in the physical world. Studies suggest that powerful LLMs, like GPT-4, are beginning to exhibit human-like cognitive abilities, including planning, reasoning, and reflection. In this paper, we introduce a research line and methodology called LLM Psychology, leveraging human psychology experiments to investigate the cognitive behaviors and mechanisms of LLMs. We transfer the Typoglycemia phenomenon from human psychology to explore the "mind" of LLMs. Unlike human brains, which rely on context and word patterns to comprehend scrambled text, LLMs use distinct encoding and decoding processes. Through Typoglycemia experiments at the character, word, and sentence levels, we observe: (I) LLMs demonstrate human-like behaviors on a macro scale, such as lower task accuracy and higher token/time consumption; (II) LLMs exhibit varying robustness to scrambled input, making Typoglycemia a benchmark for model evaluation that requires no new datasets; (III) different task types are affected to different degrees, with complex logical tasks (e.g., math) being more challenging in scrambled form; (IV) each LLM has a unique and consistent "cognitive pattern" across tasks, revealing general mechanisms in its psychological processes. We provide an in-depth analysis of hidden layers to explain these phenomena, paving the way for future research in LLM Psychology and deeper interpretability.
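
The manipulation at the heart of the paper is the classic typoglycemia transform: keep each word's first and last letters in place and shuffle its interior. As a minimal sketch of the word-level variant only (the function names, regex tokenization, and seeding are illustrative assumptions, not the paper's actual code), the scrambling could be implemented as:

```python
import random
import re

def scramble_word(word: str, rng: random.Random) -> str:
    """Shuffle a word's interior letters, keeping the first and last fixed."""
    if len(word) <= 3:
        return word  # words of 3 or fewer letters have no shuffleable interior
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def typoglycemia(text: str, seed: int = 0) -> str:
    """Apply the scramble to every alphabetic token; punctuation is untouched."""
    rng = random.Random(seed)  # fixed seed so scrambled inputs are reproducible
    return re.sub(r"[A-Za-z]+", lambda m: scramble_word(m.group(0), rng), text)

if __name__ == "__main__":
    # Prints the sentence with each word's interior letters shuffled
    # (exact output depends on the seed).
    print(typoglycemia("Research into the external behaviors of large language models"))
```

Feeding such scrambled prompts to an LLM and comparing task accuracy and token/time consumption against the clean originals is, per the abstract, how the paper probes model-specific "cognitive patterns"; the character- and sentence-level experiments would permute at those granularities instead.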

Authors (10)
  1. Miao Yu (76 papers)
  2. Junyuan Mao (7 papers)
  3. Guibin Zhang (29 papers)
  4. Jingheng Ye (15 papers)
  5. Junfeng Fang (45 papers)
  6. Aoxiao Zhong (16 papers)
  7. Yang Liu (2253 papers)
  8. Yuxuan Liang (126 papers)
  9. Kun Wang (355 papers)
  10. Qingsong Wen (139 papers)
