Nomic Exclusion Theories
- Nomic exclusion theories are frameworks that impose formal, law-like restrictions to rule out configurations, correlations, or states across diverse domains.
- They operationally manifest in logical models, quantum exclusivity, Bayesian consistency, and resource-theoretic tasks, demonstrating the role of structured constraints.
- Theories of nomic exclusion provide actionable insights into quantum phenomena, logical expressibility, and metaphysical boundaries by systematically delineating what is fundamentally impossible.
Nomic exclusion theories comprise a set of frameworks and principles in logic, physics, and philosophy wherein law-like (“nomic”) constraints systematically prohibit certain states, correlations, or measurement configurations from being physically or epistemically possible. Rather than being ad hoc or merely statistical, the denial of possibility is grounded in the formal structure of laws—whether logical, mathematical, physical, or probabilistic. These theories appear across domains: in logic (via team semantics and dependency atoms), in quantum foundations (as constraints derived from exclusivity principles, Bayesian norms, or resource-theoretic exclusion), in metaphysics (addressing the nature and vagueness of laws), and in interpretations of quantum mechanics (as responses to nonlocality, statistical independence, and superdeterminism). The scope and impact of nomic exclusion are both foundational and structural, shaping which states are permissible and where the epistemic limits of inquiry lie.
1. Logical Foundations: Team Semantics and Exclusion Dependencies
In formal logic, nomic exclusion is exemplified by the introduction of exclusion atoms into the language of first-order logic, yielding “inclusion/exclusion logic” (I/E logic) (Galliani, 2011). Under team semantics, formulas are evaluated not on single assignments but on sets of variable assignments (“teams”). An exclusion atom t₁ | t₂ asserts that no value taken by t₁ under any assignment in the team is taken by t₂ under any assignment in the team (including the same one), enforcing a non-local, team-level constraint. The semantic content of I/E logic is rigorously linked to existential second-order logic: every I/E formula is equivalent to an existential second-order formula over teams, and NP properties of teams are thereby expressible. These logics model imperfect information and database-theoretic dependencies—equality-generating and tuple-generating dependencies—via the uniformity conditions in their game-theoretic semantics. The exclusion atom thus operationalizes a nomic restriction at the level of teams: it rules out entire patterns of assignments, while existential second-order (NP) definability bounds which team properties can be lawfully imposed at all.
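To make the team-level reading concrete, the following minimal sketch (illustrative only, not drawn from Galliani 2011, and restricted to atoms over single variables rather than general terms) checks an exclusion atom against a team represented as a list of assignment dictionaries.

```python
# Minimal sketch: evaluating an exclusion atom t1 | t2 under team semantics.
# A team is a set of variable assignments; the atom holds iff no value taken
# by t1 under any assignment in the team is taken by t2 under any assignment
# (including the same one).

def satisfies_exclusion(team, t1, t2):
    """team: iterable of dicts mapping variable names to values."""
    values_t1 = {s[t1] for s in team}
    values_t2 = {s[t2] for s in team}
    return values_t1.isdisjoint(values_t2)

# Example: the team {x:1, y:2}, {x:3, y:4} satisfies x | y, but adding
# {x:2, y:5} breaks it, since 2 now occurs both as an x-value and a y-value.
team_ok  = [{"x": 1, "y": 2}, {"x": 3, "y": 4}]
team_bad = team_ok + [{"x": 2, "y": 5}]

print(satisfies_exclusion(team_ok, "x", "y"))   # True
print(satisfies_exclusion(team_bad, "x", "y"))  # False
```

Because the atom quantifies over all pairs of assignments, the check reduces to disjointness of the two value sets—a property of the team as a whole, not of any single assignment.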
2. Quantum Exclusivity Principles and Bell-Type Correlations
In the foundations of quantum theory, the exclusivity (E) principle provides a nomic constraint on event probabilities (Cabello, 2014). If a collection of events is pairwise exclusive—each pair jointly distinguishable as distinct outcomes of a single sharp measurement—then their probabilities sum to at most 1. This yields the quantum Tsirelson bound for Bell-type inequalities: additional structural assumptions (factorization for independent experiments, existence of extra sharp observables) combined with the E principle restrict the CHSH sum of event probabilities to S ≤ 2 + √2, exactly the quantum maximum (equivalent to the familiar bound 2√2 for the correlator form). This exclusion of extremal correlations (beyond quantum mechanics yet consistent with no-signaling) distinguishes quantum theory among probabilistic frameworks, showing that nomic exclusion (here instantiated by the E principle) is a key ingredient in demarcating the physical boundary of quantum contextuality and nonlocality.
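As a numerical sanity check of the bound just quoted, the following sketch uses the standard Tsirelson-optimal CHSH construction (a textbook choice, not specific to the E-principle derivation in Cabello 2014): a maximally entangled state with observables Z, X for one party and (Z ± X)/√2 for the other.

```python
# Quantum CHSH at the Tsirelson-optimal settings: correlator value 2*sqrt(2),
# event-probability value 2 + sqrt(2).
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
rho = np.outer(phi_plus, phi_plus)

A = [Z, X]                                          # Alice's observables
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]    # Bob's observables

E = lambda a, b: np.trace(rho @ np.kron(a, b)).real # correlator <A_x B_y>
S_corr = E(A[0], B[0]) + E(A[0], B[1]) + E(A[1], B[0]) - E(A[1], B[1])
S_prob = (4 + S_corr) / 2   # sum of the four CHSH winning-event probabilities

print(S_corr)  # ~2.828 = 2*sqrt(2)
print(S_prob)  # ~3.414 = 2 + sqrt(2)
```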
3. Bayesian Consistency as Nomic Exclusion
General Bayesian theories leverage rational probabilistic assignments and update protocols to reconstruct quantum theory via the principle of exclusivity (Chiribella et al., 2019). Ideal experiments—sequentially refinable measurements—imply that for any collection of pairwise exclusive outcomes (potentially drawn from different experiments), the assigned probabilities must obey ∑ₙ p(Eₙ | Aₙ, β) ≤ 1 for every agent belief state β. This normative constraint is enforced by the Bayesian consistency conditions (conditional probability, forward and backward consistency) whenever ideality is maintained. Within quantum mechanics, these ideal experiments correspond exactly to projective measurements, and the exclusivity principle guarantees that the quantum set of correlations (as in Bell and Kochen–Specker scenarios) is normatively captured. Thus, nomic exclusion emerges as internal coherence in probability assignments, with the laws of updating and measurement enforcing the exclusion.
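The inequality ∑ₙ p(Eₙ | Aₙ, β) ≤ 1 has a direct quantum reading once ideal experiments are modeled projectively: pairwise exclusive events correspond to pairwise orthogonal projectors, and the bound follows because their sum is dominated by the identity. A minimal numerical illustration under that (assumed) projective modeling:

```python
# For pairwise orthogonal projectors {P_n} and any state rho,
# sum_n tr(rho P_n) <= 1, since sum_n P_n <= I.
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Random pure state.
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Pairwise orthogonal rank-1 projectors (pairwise exclusive events) built
# from a random orthonormal set of 3 vectors in dimension 5.
Q, _ = np.linalg.qr(rng.normal(size=(d, 3)) + 1j * rng.normal(size=(d, 3)))
projectors = [np.outer(Q[:, k], Q[:, k].conj()) for k in range(3)]

total = sum(np.trace(rho @ P).real for P in projectors)
print(total)               # some value <= 1
assert total <= 1 + 1e-9   # the exclusivity bound holds
```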
4. Exclusion Tasks and Quantum Resource Measures
In resource-theoretic settings, exclusion manifests operationally via anti-distinguishability tasks—input–output games in which the aim is to output anything except the input (Uola et al., 2019). All quantum resources (states, measurements, channels) that are not free provide a strict performance advantage in such tasks. The convex weight 𝒲_F(D) is the smallest weight w such that the device D can be written as a mixture w·D′ + (1 − w)·D_free with D_free a free device; it measures the irreducibly non-free component of D. The operational meaning is direct: the relative advantage of D over free devices in a suitably chosen exclusion task is quantified by 1 − 𝒲_F(D). The measure is bounded and characterized by convex decompositions using conic programming and minimal dilations; structural features, such as the form of POVMs, yield analytical bounds on the convex weight. This quantifies nomic exclusion in the sense that the laws of quantum resource theory, together with the operational structure, exclude certain performances as unattainable without genuine quantum resources.
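The following sketch illustrates the shape of such an exclusion task with the standard trine-state example (a common textbook construction, not taken from Uola et al., 2019): a player holding the prepared qubit can always name a state that was not prepared, while a player with no access to it errs with probability at least 1/3.

```python
# Exclusion (anti-distinguishability) game with the three "trine" qubit states.
# The POVM M_k = (2/3)(I - |t_k><t_k|) never outputs the prepared label, so the
# quantum error probability is 0; a state-blind player errs with probability 1/3.
import numpy as np

def trine(k):
    # |t_k> = (|0> + e^{2 pi i k/3} |1>)/sqrt(2)
    return np.array([1, np.exp(2j * np.pi * k / 3)]) / np.sqrt(2)

states = [np.outer(trine(k), trine(k).conj()) for k in range(3)]
povm = [(2 / 3) * (np.eye(2) - states[k]) for k in range(3)]

# Valid POVM: positive elements summing to the identity.
assert np.allclose(sum(povm), np.eye(2))

# Error probability = (1/3) sum_k tr(rho_k M_k): probability of outputting the prepared label.
p_err_quantum = sum(np.trace(states[k] @ povm[k]).real for k in range(3)) / 3
p_err_blind = 1 / 3   # best state-independent strategy: always output one fixed label

print(p_err_quantum)  # ~0.0
print(p_err_blind)    # 0.333...
```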
5. Nomic Vagueness and Exclusion via Law Structure
Nomic exclusion extends to the metaphysical level, including the possibility that fundamental laws themselves may be vague (Chen, 2020). “Fundamental nomic vagueness” denotes law-like structures where the domain of lawful worlds lacks sharp boundaries. In thermodynamic or cosmological contexts (e.g., the Past Hypothesis), this vagueness is expressed by fuzzy macrostates (“S₀-ish”), challenging the standard mathematical treatment that requires crisp sets. The introduction of density-matrix realism and the Initial Projection Hypothesis resolves this by embedding the law directly in the dynamics, rendering initial conditions both exact and traceable. The exclusion here is twofold: vague laws can exclude borderline worlds, while exact but arbitrary laws may fail meaningful traceability unless realized in the micro-dynamical ontology. This has repercussions for which phenomena are nomologically admitted or excluded by the theory.
6. Quantum Foundations: Ontic/Nomic Exclusion in Wavefunction Models
Hidden-variable theories in quantum foundations reveal the necessity of nomic constraints on allowed models (Drezet, 2021). Anomic models (ψ_A), where the measurement response is independent of the wavefunction, face strong no-go theorems (PBR, Hardy). Under restricted ontic indifference and preparation independence, such models lead to internal contradictions—measurement outcomes become impossible for overlapping supports, violating normalization. Conversely, nomic models (ψ_N), where the wavefunction is dynamically relevant, are compatible with quantum predictions. The exclusion operates at the law level: only models where the wavefunction enters into the laws (as a nomic variable) are permitted; others are structurally ruled out by the constraints, illustrating “nomic exclusion” of non-lawlike (anomic) quantum interpretations.
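A short numerical sketch of the PBR-style exclusion invoked here (the standard PBR construction, reproduced for illustration): for two systems each prepared in |0⟩ or |+⟩, there is an entangled four-outcome measurement in which each outcome is forbidden for exactly one of the four preparations—precisely the kind of zero-probability constraint that anomic response functions with overlapping supports cannot reproduce.

```python
# PBR measurement: each outcome |xi_i> is orthogonal to one product preparation,
# so that outcome occurs with probability exactly zero for that preparation.
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

kron = np.kron
preparations = [kron(ket0, ket0), kron(ket0, plus),
                kron(plus, ket0), kron(plus, plus)]

# PBR basis vectors, ordered so that |xi_i> is orthogonal to the i-th preparation.
xi = [(kron(ket0, ket1) + kron(ket1, ket0)) / np.sqrt(2),
      (kron(ket0, minus) + kron(ket1, plus)) / np.sqrt(2),
      (kron(plus, ket1) + kron(minus, ket0)) / np.sqrt(2),
      (kron(plus, minus) + kron(minus, plus)) / np.sqrt(2)]

# The four vectors form an orthonormal basis ...
assert np.allclose(np.array(xi) @ np.array(xi).conj().T, np.eye(4))
# ... and outcome i never occurs for preparation i.
for i, prep in enumerate(preparations):
    print(abs(np.vdot(xi[i], prep)) ** 2)   # all ~0.0
```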
7. Superdeterminism and Statistical Nomic Exclusion
In the debate over Bell’s theorem and local causality, nomic exclusion theories emerge as an entire category of superdeterministic models (Waegell et al., 27 Sep 2025). Here, violation of statistical independence (SI)—that is, ρ(λ | X) ≠ ρ(λ)—is not achieved by fine-tuned initial conditions or statistical flukes, but by the laws themselves preventing certain measurement-setting–state combinations. Explicit “goblin” toy models instantiate law-like agents that dynamically preclude incompatible settings or outcomes. The philosophical consequences are significant: nomic exclusion undermines freedom of choice in compatibilist accounts (experimenters’ apparent choices are constrained by nomological structure), introduces a conspiratorial character (law-like constraints mimic random selection yet hide certain counterfactuals), and raises challenges for the scientific method in terms of sampling and representation. This approach excludes certain arrangements as physically impossible by nomic principle rather than by statistical happenstance.
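To see why law-level violation of statistical independence circumvents Bell’s bound, the following toy sketch (an expository assumption of this article, not one of the paper’s goblin models) lets the hidden variable’s distribution depend on the measurement settings; purely local, deterministic response functions then reach the algebraic maximum of the CHSH expression.

```python
# Toy violation of statistical independence: rho(lambda | x, y) != rho(lambda).
# The "law" correlates lambda with the settings so that the outcomes satisfy
# a XOR b = x AND y, while the local response functions ignore the remote setting.
import itertools
import random

def draw_lambda(x, y):
    a = random.choice([0, 1])
    b = a ^ (x & y)          # lambda prescribes outcomes for the settings actually used
    return (a, b)

def outcome_A(x, lam): return lam[0]   # local: depends only on x and lambda
def outcome_B(y, lam): return lam[1]   # local: depends only on y and lambda

def correlator(x, y, n=10000):
    s = 0
    for _ in range(n):
        lam = draw_lambda(x, y)
        s += (-1) ** outcome_A(x, lam) * (-1) ** outcome_B(y, lam)
    return s / n

S = sum(correlator(x, y) if (x, y) != (1, 1) else -correlator(x, y)
        for x, y in itertools.product([0, 1], repeat=2))
print(S)   # ~4.0, beyond both the classical (2) and quantum (2*sqrt(2)) bounds
```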
8. Epistemic Horizons and Exclusion in Deterministic Settings
Nomic exclusion is further instantiated in deterministic “nomic toy theories” modeling agents and systems as symplectic dynamical objects (Fankhauser et al., 25 Jun 2024). Here, the epistemic horizon—the impossibility of learning all system properties simultaneously—arises as a consequence of the allowed physical interactions. Only collections of variables Z whose pairwise Poisson brackets vanish (Z Ω Zᵀ = 0, with Ω the symplectic form) can be measured jointly and exactly; incompatible observables are excluded for epistemic agents embedded in the theory. This operationalizes measurement uncertainty as nomic exclusion: the theory makes simultaneous knowledge of noncommuting observables impossible, not by fiat but by the deterministic laws governing interaction and subject–object inseparability. Such exclusion is structurally identical to quantum uncertainty, even when the underlying theory is classical.
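A minimal sketch of the compatibility criterion (with the assumed convention that the rows of Z collect the coefficients of linear phase-space observables and Ω is the symplectic form): the matrix Z Ω Zᵀ lists all pairwise Poisson brackets, and joint, exact measurability requires it to vanish.

```python
# Pairwise Poisson brackets of linear observables z_i = Z_i . (q1, p1, q2, p2)
# are the entries of Z @ Omega @ Z.T; a vanishing matrix means joint measurability.
import numpy as np

# Symplectic form for two degrees of freedom, ordering (q1, p1, q2, p2).
omega_block = np.array([[0, 1], [-1, 0]])
Omega = np.kron(np.eye(2), omega_block)

q1 = np.array([1, 0, 0, 0])
p1 = np.array([0, 1, 0, 0])
q2 = np.array([0, 0, 1, 0])

def brackets(rows):
    Z = np.array(rows, dtype=float)
    return Z @ Omega @ Z.T

print(brackets([q1, q2]))   # zero matrix: q1 and q2 are jointly, exactly knowable
print(brackets([q1, p1]))   # {q1, p1} = 1 != 0: excluded from joint, exact knowledge
```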
9. Nomic Exclusion in Foundations, Logic, and Mathematics
Formal mathematical exclusions, such as those articulated in the rejection of the law of excluded middle (LEM) in constructive mathematics and topos theory, have consequences for the logical structure of quantum theory (Esfahanian et al., 2023). Boolean frameworks enforcing LEM cannot consistently describe quantum superposition, nor can they accommodate the nilpotent infinitesimals required by the Kock–Lawvere axiom, leading to forced dichotomies that are structurally excluded in constructive settings. Topos-theoretic approaches, with their intuitionistic internal logic, allow the nomic exclusion of unrestricted classical reasoning, thus enabling richer modeling of indeterminacy and superposition. In quantum logic, nomic exclusion supports formulations that prevent certain kinds of classical reasoning about measurement outcomes and states.
10. Constraints, Criteria, and the Simplicity Bubble Effect
The epistemic selection of laws via “nomic realism” is often guided by criteria such as simplicity (Abrahão et al., 2023). However, simplicity as an exclusion criterion is subject to the “simplicity bubble effect”: local optimization for simplicity can trap theorists into favoring laws that are not globally representative or adequate. This phenomenon, analogous to overfitting in machine learning, shows that nomic exclusion based purely on simplicity may systematically exclude legitimate candidates when empirical underdetermination prevails (e.g., quantum collapse vs. many-worlds interpretations). Thus, nomic exclusion theories must employ varied and robust criteria to avoid misidentifying the nomic structure of reality.
Nomic exclusion theories encapsulate the structural, operational, and epistemic limits imposed by law-like principles across logical, physical, probabilistic, and metaphysical systems. Whether manifesting as logical exclusion atoms, quantum exclusivity rules, Bayesian rational normativity, resource-theoretic exclusion, metaphysical vagueness, forbidden hidden-variable models, constraints on statistical independence, epistemic horizons, constructive logic, or selection effects via simplicity, the function of nomic exclusion is to delineate—by virtue of law—what configurations, correlations, or states are not merely unobserved but systematically impossible or unrepresentable within the theory.