From Evidence to Belief: A Bayesian Epistemology Approach to Language Models (2504.19622v1)
Abstract: This paper investigates the knowledge of LLMs from the perspective of Bayesian epistemology. We explore how LLMs adjust their confidence and responses when presented with evidence of varying informativeness and reliability. To study these properties, we create a dataset with various types of evidence and analyze LLMs' responses and confidence using verbalized confidence, token probability, and sampling. We observe that LLMs do not consistently follow Bayesian epistemology: they satisfy the Bayesian confirmation assumption with true evidence but fail to adhere to other Bayesian assumptions when encountering other evidence types. We also demonstrate that LLMs can exhibit high confidence when given strong evidence, but high confidence does not always guarantee high accuracy. Our analysis further reveals that LLMs are biased toward golden evidence and show varying performance depending on the degree of irrelevance, helping explain why they deviate from Bayesian assumptions.
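The Bayesian confirmation assumption mentioned in the abstract is the standard claim that confirming evidence should raise the probability of a hypothesis: P(H|E) > P(H) whenever P(E|H) > P(E|not H). The sketch below is an illustrative restatement of that textbook rule, not code from the paper; the function name and example numbers are our own.

```python
def bayesian_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H|E) via Bayes' rule.

    prior:               P(H), confidence before seeing evidence
    likelihood_if_true:  P(E|H), probability of the evidence if H holds
    likelihood_if_false: P(E|~H), probability of the evidence otherwise
    """
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1 - prior)
    return numerator / denominator

# Confirming evidence (P(E|H) > P(E|~H)) raises confidence above the prior,
# which is the behavior the paper checks LLM confidence against.
posterior = bayesian_update(prior=0.5, likelihood_if_true=0.9, likelihood_if_false=0.2)
```

With these illustrative numbers the posterior rises from 0.5 to roughly 0.82; an LLM that conforms to the confirmation assumption should likewise report higher confidence after seeing such evidence.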