Mechanisms of Judgment in Large Language Models

Determine how judgment is instantiated and operationalized within large language models, specifying the internal mechanisms that produce evaluative outputs.

Background

The paper argues that LLMs are increasingly embedded in social processes that require evaluative judgments, such as assessing credibility and assisting decision-making. Despite this growing role, the authors emphasize that it remains unresolved how these systems internally produce what appear to be judgments.

The paper contrasts human and artificial epistemic pipelines and highlights seven epistemological fault lines between them. Against this backdrop, it motivates the need to clarify whether and how LLMs instantiate judgment-like processes, given that their generation procedures are characterized as stochastic pattern completion rather than belief formation or epistemic evaluation.
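To make the notion of "stochastic pattern completion" concrete, the sketch below shows the bare autoregressive sampling loop that underlies LLM text generation: at each step the model emits logits over the vocabulary and the next token is drawn from a temperature-scaled softmax. This is an illustrative simplification, not the paper's formalism; the choice of GPT-2 via Hugging Face transformers, the prompt, the temperature, and the generation length are all assumptions made for the example.

```python
# Minimal sketch (assumptions: GPT-2 via Hugging Face transformers, temperature 0.8,
# 20 generated tokens) of the autoregressive sampling loop behind LLM generation.
# Each step samples the next token from a softmax over logits; there is no explicit
# belief state or evidence-weighing step anywhere in the loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "This news source is credible because"  # hypothetical prompt for illustration
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

temperature = 0.8
with torch.no_grad():
    for _ in range(20):                                       # generate 20 tokens, one at a time
        logits = model(input_ids).logits[:, -1, :]            # scores for the next token only
        probs = torch.softmax(logits / temperature, dim=-1)   # temperature-scaled distribution
        next_id = torch.multinomial(probs, num_samples=1)     # stochastic draw, not an evaluation
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

The point of the sketch is that any evaluative-sounding continuation arises from the same sampling step as any other text; nothing in the loop distinguishes an assessed claim from a completed pattern, which is precisely why the paper treats the internal basis of apparent judgments as an open question.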

References

A central open question is how judgment itself is instantiated and operationalized in LLMs.

Epistemological Fault Lines Between Human and Artificial Intelligence (2512.19466 - Quattrociocchi et al., 22 Dec 2025) in Section 1 (Introduction)