
From Evidence to Belief: A Bayesian Epistemology Approach to Language Models (2504.19622v1)

Published 28 Apr 2025 in cs.AI

Abstract: This paper investigates the knowledge of LLMs from the perspective of Bayesian epistemology. We explore how LLMs adjust their confidence and responses when presented with evidence of varying informativeness and reliability. To study these properties, we create a dataset with various types of evidence and analyze LLMs' responses and confidence using verbalized confidence, token probability, and sampling. We observe that LLMs do not consistently follow Bayesian epistemology: they satisfy the Bayesian confirmation assumption with true evidence but fail to adhere to other Bayesian assumptions when encountering other evidence types. We also demonstrate that LLMs can exhibit high confidence when given strong evidence, but this does not always guarantee high accuracy. Our analysis further reveals that LLMs are biased toward golden evidence and show varying performance depending on the degree of irrelevance, helping explain why they deviate from Bayesian assumptions.
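The abstract names three confidence-elicitation methods (verbalized confidence, token probability, and sampling) and frames belief updating in Bayesian terms. A minimal sketch of two of these ideas follows; the function names and inputs are illustrative assumptions, not the authors' code, and the sampling helper simply treats majority-vote frequency over repeated model answers as a confidence estimate.

```python
from collections import Counter

def sampling_confidence(sample_answers):
    """Estimate confidence as the empirical frequency of the majority answer
    across repeated samples of the same prompt (hypothetical helper; the
    paper's actual protocol may differ)."""
    counts = Counter(sample_answers)
    answer, freq = counts.most_common(1)[0]
    return answer, freq / len(sample_answers)

def bayesian_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) via Bayes' rule for a binary hypothesis H,
    the normative baseline against which LLM confidence shifts are judged."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)
```

For example, eight "Paris" answers out of ten samples would yield a sampling confidence of 0.8, while `bayesian_update(0.5, 0.9, 0.2)` gives the posterior a rational agent should hold after seeing evidence that is 0.9 likely under the hypothesis and 0.2 likely otherwise.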
