Logical Omniscience and Bounded Rationality
- Logical Omniscience and Bounded Rationality is a field that contrasts the ideal of complete logical deduction with the practical limits of computation and information.
- It employs tools from complexity theory, PAC-learning, and modal logics to model how agents process information under resource constraints.
- The framework offers practical insights into equilibrium computation, iterative reasoning, and empirical testing in multi-agent systems.
Logical omniscience is the property of an agent or formal system that, upon knowing a set of propositions, is supposed to know all their logical consequences—including every deducible fact, however complex. This idealization is entrenched in classical models of epistemic logic and rational decision theory, where agents are assumed to compute expected utilities by resolving all relevant mathematical and logical questions, and to reason counterfactually about possible future decisions. Bounded rationality, in contrast, reflects the reality that agents—biological or artificial—face intrinsic computational and informational limitations, precluding such exhaustive inference or planning. Recent developments apply precise complexity and resource-bounded frameworks, as well as refined modal logics, to move from the unattainable ideal of logical omniscience toward formal models that capture the actual epistemic and decision-making capacities of bounded agents (Oesterheld et al., 2023, Fourny, 2018, Aaronson, 2011).
1. Foundations: Logical Omniscience and Classical Rationality
In classical Bayesian decision theory (BDT), a rational agent facing a decision problem with utilities is postulated to simultaneously:
- Decide every mathematical or logical question relevant to the options (e.g., "Is the $n$-th bit of $\pi$ equal to 1?"),
- Compute intractable or NP-hard optimizations,
- Consistently reason about counterfactuals, even when the environment encodes or predicts the agent’s own behavior.
Formal epistemic logic encodes logical omniscience via closure under entailment: if $\varphi$ and $\varphi \to \psi$ are known, then $\psi$ is always known as well. In game theory, this manifests in solution concepts dependent on agents knowing the full consequence structure of the game and their opponents' reasoning (Oesterheld et al., 2023, Fourny, 2018, Aaronson, 2011).
However, no physically realizable agent can perform these idealized tasks. The logical omniscience assumption leads to paradoxes in settings involving self-reference (such as agents facing payoffs dependent on their own code), as in the Simplified Adversarial Offer (SAO), and leaves classical theory normatively inadequate for real-world computation or learning (Oesterheld et al., 2023).
2. Resource-Bounded Models: Complexity-Theoretic Approaches
Complexity theory provides tools to formalize knowledge, inference, and rationality without presupposing logical omniscience:
- Cobham’s Axioms define the class of feasibly computable (polynomial-time) functions as the minimal closure under primitive operations, composition, and bounded recursion (Aaronson, 2011). Identifying an agent’s "know-how" with this class of feasible functions ensures closure under practical construction rules but not exhaustive consequence finding.
- PAC-Learning ties sample complexity for inductive inference to the effective complexity of the hypothesis class, measured by VC-dimension or representation size. Resource-bounded learners generalize reliably only within restricted hypothesis spaces for which generalization is tractable (Aaronson, 2011).
- Equilibrium Computation for general (normal-form) games is PPAD-complete; thus, even agents endowed with full game structure and unlimited memory cannot guarantee to compute Nash equilibria in polynomial time (Aaronson, 2011).
This framework permits closure properties (composition of procedures, bounded induction) but falls short of universal deductive ability, separating "implicit" (algorithmic) and "explicit" (premise-based) knowledge (Aaronson, 2011).
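The PAC-learning point above can be made concrete with the standard sample-complexity bound for a finite hypothesis class in the realizable setting, $m \ge \frac{1}{\epsilon}(\ln|H| + \ln\frac{1}{\delta})$. The sketch below is the textbook finite-class bound, not a formula taken from the cited papers:

```python
import math

def pac_sample_bound(num_hypotheses: int, epsilon: float, delta: float) -> int:
    """Samples sufficient for a consistent learner over a finite class H
    to reach error <= epsilon with probability >= 1 - delta (realizable
    case): m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(num_hypotheses) + math.log(1.0 / delta)) / epsilon)

# Doubling the hypothesis class adds only ln(2)/epsilon extra samples,
# illustrating why restricted hypothesis spaces keep learning tractable.
m1 = pac_sample_bound(1_000, epsilon=0.05, delta=0.01)
m2 = pac_sample_bound(2_000, epsilon=0.05, delta=0.01)
print(m1, m2, m2 - m1)
```

The logarithmic dependence on the class size is the formal sense in which a resource-bounded learner "generalizes reliably only within restricted hypothesis spaces."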
3. Logics of Bounded Reasoning: Kripke Semantics, Levels, and Games
Refined modal logics employ Kripke frames to weaken logical omniscience and characterize rational behavior at varying reasoning depths:
- Kripke Frames encompass sets of possible worlds (including logically or even impossibly possible ones), relations for epistemic and logical accessibility, and world-variable maps to positions in strategy spaces (Fourny, 2018).
- Level-$n$ Logical Omniscience stratifies logical accessibility into levels:
- Level 0: non-normal worlds ("everything possible, nothing necessary").
- Level $n+1$: normal worlds with the property that all accessible deviations land in level-$n$ worlds; logical omniscience "fades" along counterfactuals (Fourny, 2018).
In game-theoretic applications, this structure yields:
- Necessary rationality: no agent chooses an option from which a logically possible, payoff-improving deviation exists.
- Necessary factual omniscience: all agents know the global state (strategy profile).
- Limitation: it is not possible to simultaneously maintain perfect logical omniscience, necessary rationality, and factual omniscience under nontrivial counterfactuals—forming an impossibility triangle that necessitates quantization of logical omniscience (Fourny, 2018).
The structure of levels is exactly mirrored in iterative deletion solution concepts, culminating in the Perfectly Transparent Equilibrium (PTE): a profile is the PTE if it survives all levels of iterated elimination and is characterized in the Kripke model by arbitrarily high levels of logical omniscience (Fourny, 2018).
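The level-by-level structure can be illustrated with a generic iterated-elimination procedure. The sketch below uses iterated elimination of strictly dominated pure strategies as an analogy; it is not Fourny's exact PTE construction, whose elimination criterion differs:

```python
def iterated_elimination(payoffs_row, payoffs_col):
    """Iteratively delete strictly dominated pure strategies in a
    two-player normal-form game. payoffs_row[i][j] / payoffs_col[i][j]
    give the row/column player's payoff for the strategy pair (i, j).
    Each pass of the while-loop corresponds to one 'level' of reasoning."""
    rows = list(range(len(payoffs_row)))
    cols = list(range(len(payoffs_row[0])))
    changed = True
    while changed:
        changed = False
        # Row strategy i is strictly dominated by i2 if i2 does strictly
        # better against every surviving column strategy.
        for i in rows[:]:
            if any(all(payoffs_row[i2][j] > payoffs_row[i][j] for j in cols)
                   for i2 in rows if i2 != i):
                rows.remove(i); changed = True
        for j in cols[:]:
            if any(all(payoffs_col[i][j2] > payoffs_col[i][j] for i in rows)
                   for j2 in cols if j2 != j):
                cols.remove(j); changed = True
    return rows, cols

# Prisoner's dilemma: "defect" (index 1) strictly dominates "cooperate".
row_u = [[3, 0], [5, 1]]
col_u = [[3, 5], [0, 1]]
print(iterated_elimination(row_u, col_u))  # ([1], [1])
```

A profile surviving every round of such elimination is the analogue of a profile "characterized by arbitrarily high levels of logical omniscience" in the Kripke model.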
4. Formal Theories of Bounded Rational Agents
A precise, learning-theoretic model of bounded rationality dispenses with logical omniscience:
- Boundedly Rational Inductive Agent (BRIA): Defined by a sequence of choices and payoff estimates, evaluated against a countable family of efficiently computable hypotheses. Requirements:
- No-overestimation: writing $\hat{u}_t$ for the agent's self-estimate and $u_t$ for the realized reward at step $t$, the agent's average self-estimates do not systematically exceed realized rewards:

$$\limsup_{n \to \infty} \frac{1}{n} \sum_{t=1}^{n} \left(\hat{u}_t - u_t\right) \le 0$$
- Coverage: If a hypothesis outpromises the agent (offers higher payoff estimates) infinitely often, then actual performance refutes it on test sets, ensuring the agent at least matches the performance of viable hypotheses (Oesterheld et al., 2023).
Key properties:
- Existence of computable BRIAs for any computably enumerable, efficiently computable hypothesis class.
- BRIAs guarantee learning of payoff lower bounds for "easy options" (efficiently computable arms with guaranteed rewards).
- In multi-agent repeated games, pairs of BRIAs can converge to any strictly individually rational correlated profile, reproducing folk theorem properties under computability constraints (Oesterheld et al., 2023).
The auction-with-allowance protocol operationalizes BRIA testing, tracking expert "wealth" to allocate exploration and verify outpromising hypotheses.
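A toy wealth-tracking loop conveys the flavor of this protocol. The sketch below is loosely inspired by the auction-with-allowance idea and is illustrative only (the expert names, the per-round allowance, and the bookkeeping are assumptions, not the paper's construction): each expert bids an estimated payoff for its recommended action, the affordable highest bidder is followed, pays its bid, and collects the realized reward, so systematic overpromisers go broke.

```python
def run_auction(experts, true_payoff, rounds=2000, allowance=0.01):
    """Toy wealth-tracking selection loop (illustrative, not the paper's
    exact protocol). experts maps a name to (action, estimated_payoff)."""
    wealth = {name: 1.0 for name in experts}
    for _ in range(rounds):
        # A small allowance lets broke experts eventually bid again.
        for name in wealth:
            wealth[name] += allowance
        # Consider only bids the expert can afford to pay.
        bids = {name: est for name, (action, est) in experts.items()
                if wealth[name] >= est}
        if not bids:
            continue
        chosen = max(bids, key=bids.get)
        action, est = experts[chosen]
        reward = true_payoff(action)
        wealth[chosen] += reward - est  # pay the bid, collect the reward
    return wealth

# Two hypothetical experts: one honest, one systematically overpromising.
experts = {"honest": ("a", 0.5), "overpromiser": ("b", 0.9)}
payoffs = {"a": 0.5, "b": 0.2}
w = run_auction(experts, lambda a: payoffs[a])
print(w["honest"] > w["overpromiser"])  # the overpromiser loses wealth
```

The honest expert breaks even on every trade and slowly accumulates allowance, while the overpromiser loses the gap between its bid and the realized reward each time it is followed.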
5. Avoidance of Logical Omniscience: Proof Techniques and Conceptual Impact
Mechanisms to avoid logical omniscience in bounded rationality frameworks include:
- Resource-bounded exploration: Only hypotheses within given computational limits (e.g., polynomial time) are tested; agents never presume to solve arbitrary logical or mathematical problems (Oesterheld et al., 2023).
- No counterfactual payoffs: Rewards are observed only for actually chosen actions, not for hypothetical alternatives; this blocks paradoxes from self-prediction and counterfactual reasoning.
- Testing and rejection: Outpromising hypotheses are empirically audited, and only those that survive repeated testing without systematic overestimation are followed, avoiding Dutch-book vulnerabilities (Oesterheld et al., 2023).
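The testing-and-rejection mechanism can be sketched as an audit over a realized history: a hypothesis is flagged whenever its running average estimate exceeds realized rewards by more than a tolerance. This is a simplified illustration with a fixed threshold, whereas the formal no-overestimation condition is asymptotic:

```python
def audit(estimates, rewards, tolerance=0.05):
    """Flag rounds at which a hypothesis's running average overestimate
    (estimate - realized reward) exceeds the tolerance. Illustrative
    sketch; the paper's condition is asymptotic, not a fixed threshold."""
    rejected_at = []
    gap_sum = 0.0
    for t, (est, r) in enumerate(zip(estimates, rewards), start=1):
        gap_sum += est - r
        if gap_sum / t > tolerance:
            rejected_at.append(t)
    return rejected_at

# An honest history passes; a padded ("outpromising") one is flagged.
honest = audit([0.5, 0.6, 0.4], [0.5, 0.6, 0.4])
padded = audit([0.9, 0.9, 0.9], [0.5, 0.6, 0.4])
print(honest, padded)  # [] vs every round flagged
```

Because only realized rewards for actually chosen actions enter the audit, no counterfactual payoff information is ever required.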
Kripke-model approaches encode degradation of logical omniscience under counterfactuals, aligning epistemic capacity with levels of iterated rationality elimination (Fourny, 2018). Complexity-theoretic boundaries impose practical limits on the scope of reasoning and proof, recasting cases like the grue/bleen induction puzzle and the Turing Test in asymptotic rather than absolute terms (Aaronson, 2011).
6. Implications and Philosophical Significance
Computational and modal frameworks for bounded rationality and weakened logical omniscience clarify deep puzzles in epistemology and decision theory:
- The transition from "knowing that" (logically closed sets) to "knowing how" (feasible algorithmic procedures) captures the observed limits of human and artificial agents without denying closure properties where feasible (Aaronson, 2011).
- PAC learning theory grounds inductive rationality, relating model class complexity to sample requirements and resolving long-standing issues about simplicity, induction, and representation dependence (Aaronson, 2011).
- Hardness results for equilibrium and market computation give new substance to bounded rationality in economic models and game-theoretic reasoning (Aaronson, 2011).
- Modern proof theory (interactive, probabilistic, and quantum proofs) demonstrates that "being convinced" is itself resource-constrained, with implications for cryptographic trust and the foundations of mathematics (Aaronson, 2011).
A plausible implication is that working models of epistemic and strategic behavior must explicitly encode resource bounds and stratify logical accessibility, moving away from idealization and toward empirically and descriptively adequate formalism.
References
- A Theory of Bounded Inductive Rationality (Oesterheld et al., 2023)
- Kripke Semantics of the Perfectly Transparent Equilibrium (Fourny, 2018)
- Why Philosophers Should Care About Computational Complexity (Aaronson, 2011)