I Think, Therefore I Hallucinate: Minds, Machines, and the Art of Being Wrong (2503.05806v1)

Published 4 Mar 2025 in q-bio.NC

Abstract: This theoretical work examines 'hallucinations' in both human cognition and LLMs, comparing how each system can produce perceptions or outputs that deviate from reality. Drawing on neuroscience and machine learning research, we highlight the predictive processes that underlie human and artificial thought. In humans, complex neural mechanisms interpret sensory information under uncertainty, sometimes filling in gaps and creating false perceptions. This inference occurs hierarchically: higher cortical levels send top-down predictions to lower-level regions, while mismatches (prediction errors) propagate upward to refine the model. LLMs, in contrast, rely on auto-regressive modeling of text and can generate erroneous statements in the absence of robust grounding. Despite these different foundations - biological versus computational - the similarities in their predictive architectures help explain why hallucinations occur. We propose that the propensity to generate incorrect or confabulated responses may be an inherent feature of advanced intelligence. In both humans and AI, adaptive predictive processes aim to make sense of incomplete information and anticipate future states, fostering creativity and flexibility, but also introducing the risk of errors. Our analysis illuminates how factors such as feedback, grounding, and error correction affect the likelihood of 'being wrong' in each system. We suggest that mitigating AI hallucinations (e.g., through improved training, post-processing, or knowledge-grounding methods) may also shed light on human cognitive processes, revealing how error-prone predictions can be harnessed for innovation without compromising reliability. By exploring these converging and divergent mechanisms, the paper underscores the broader implications for advancing both AI reliability and scientific understanding of human thought.
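The predictive-processing account sketched in the abstract can be made concrete with a toy update loop. Below is a minimal sketch of hierarchical prediction-error minimization, assuming a two-level linear generative model: a higher level sends a top-down prediction of lower-level activity, and the mismatch propagates upward to refine the higher-level estimate. The variable names, learning rate, and synthetic data are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-level setup: a latent "cause" generates noisy sensory data
# through fixed generative weights W. None of these values come from the paper.
W = rng.normal(size=(4, 2))                      # generative weights: latent -> sensory
true_latent = rng.normal(size=2)                 # hidden cause of the observation
sensory = W @ true_latent + 0.1 * rng.normal(size=4)  # noisy sensory input

latent = np.zeros(2)   # higher-level estimate, refined by prediction errors
lr = 0.05              # inference step size

for step in range(200):
    prediction = W @ latent          # top-down prediction of sensory input
    error = sensory - prediction     # bottom-up prediction error
    latent += lr * (W.T @ error)     # refine the higher-level estimate

print("residual prediction error:", np.linalg.norm(sensory - W @ latent))
print("recovered vs. true latent:", latent, true_latent)
```

Note the loop is just gradient descent on the squared prediction error, which is why the estimate converges near the true cause; the residual that remains is the part of the input the two-level model cannot explain away, loosely analogous to the gap-filling (and occasional confabulation) the abstract describes.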
