General Probabilistic Theories (GPTs)

Updated 12 April 2026
  • General Probabilistic Theories (GPTs) are a unifying framework defined by convex state spaces and effect sets that capture operational probabilities across classical, quantum, and alternative theories.
  • They enable systematic exploration of generalized physical theories by relaxing the no-restriction hypothesis and applying techniques like self-dualization to model intrinsic noise and alternative correlations.
  • GPTs facilitate practical analysis of composite systems and resource theories by characterizing nonlocality, contextuality, and optimal state discrimination through convex optimization methods.

General Probabilistic Theories (GPTs) are a broad formal framework for describing physical systems, states, measurements, and transformations grounded only in operationally accessible probabilities. By abstracting the convex geometry at the core of both classical and quantum probability, GPTs enable systematic exploration of alternative and generalized physical theories, encompassing not only quantum and classical mechanics but also "foil theories" such as boxworld and toy models. The GPT paradigm characterizes physical theories in terms of convex state spaces, effect sets, and measurement statistics, providing a unified and flexible language for analyzing operational phenomena, information processing, resource theories, and foundational questions in quantum theory.

1. Foundations and Structure of GPTs

A single-system GPT is defined by a finite-dimensional real vector space $V$ (of dimension $n$), a compact convex normalized "state space" $\Omega \subset V$, a convex "effect set" $E \subset V^*$ (the dual space), and a distinguished "unit effect" $u \in E$ satisfying $u(\omega) = 1$ for all $\omega \in \Omega$. Effects $e \in E$ are linear functionals $e: V \to [0,1]$, and probabilities are computed via $p(e|\omega) = \langle e, \omega \rangle$. The unnormalized state cone is $V_+ = \{\lambda\omega : \lambda \geq 0,\ \omega \in \Omega\}$, and the dual cone is $V_+^* = \{f \in V^* : f(v) \geq 0 \ \text{for all}\ v \in V_+\}$ (Janotta et al., 2013; Plávala, 2021).

In the standard GPT formulation, the no-restriction hypothesis (NRH) is often assumed, so that all mathematically possible probability-valued linear functionals compatible with the state space are physical: $E = \{e \in V^* : 0 \leq e(\omega) \leq 1 \ \text{for all}\ \omega \in \Omega\}$, ensuring a duality between the spaces of states and effects. Classical and quantum theories both satisfy the NRH; their state spaces (simplex and Bloch ball, respectively) and effect sets are fully determined by this duality (Janotta et al., 2013).

States form convex sets, allowing probabilistic mixtures, and pure states are the extremal points. Effects are the operationally accessible measurement events, with measurements defined as sets $\{e_i\} \subset E$ such that $\sum_i e_i = u$. Transformations are positive linear maps that preserve normalization (Plávala, 2021).
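
To make these definitions concrete, the following minimal sketch (Python with NumPy; the specific "gbit" system, whose normalized state space is the unit square, is a standard example chosen for this illustration) represents states and effects as vectors and computes probabilities via the pairing $p(e|\omega) = \langle e, \omega \rangle$:

```python
import numpy as np

# A "gbit": normalized states are (1, x, y) with (x, y) in the square [-1, 1]^2;
# the leading coordinate carries normalization, so the unit effect is u = (1, 0, 0).
u = np.array([1.0, 0.0, 0.0])

# Pure states: the extremal points (corners) of the square.
pure_states = [np.array([1.0, x, y]) for x in (-1, 1) for y in (-1, 1)]

# Under the NRH, the sharp effects of the first binary observable are
# e(omega) = (1 + x)/2 and (1 - x)/2, valued in [0, 1] on the whole square.
e_plus = np.array([0.5, 0.5, 0.0])
e_minus = np.array([0.5, -0.5, 0.0])

def prob(effect, state):
    """Outcome probability p(e|omega) = <e, omega>."""
    return float(effect @ state)

# A measurement is a set of effects summing to the unit effect.
assert np.allclose(e_plus + e_minus, u)

# Convex mixtures of states are again states, and probabilities stay normalized.
omega = 0.5 * pure_states[0] + 0.5 * pure_states[3]
print(prob(e_plus, omega), prob(e_minus, omega))   # 0.5 0.5
```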

2. Relaxing the No-Restriction Hypothesis and Generalizations

Relaxing the NRH leads to frameworks where the effect set $E$ is a convex, closed subset of $V^*$, specified independently of the state cone $V_+$. The only constraints are that $0, u \in E$, that $e(\omega) \in [0,1]$ for all $e \in E$ and $\omega \in \Omega$, and closure under convex combinations and reversible symmetry. In such "restricted" GPTs, the effect cone $E_+$ need not span the full dual cone $V_+^*$, and duality between states and effects breaks down (Janotta et al., 2013).

This generalization supports the construction of new theories, such as those with intrinsic noise: by "shrinking" extremal effects toward the maximally mixed effect $u/2$, all measurement outcomes become noisy, and no pure state can be identified perfectly. For instance, in the "gbit" of boxworld, one can define noisy effects $\tilde{e} = \lambda e + (1-\lambda)\,u/2$ (with $0 < \lambda < 1$) that lie strictly within the NRH octahedron except for the trivial effects (Janotta et al., 2013).
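
Continuing the gbit sketch above (the mixing form $\tilde{e} = \lambda e + (1-\lambda)\,u/2$ is one natural parametrization of the "shrinking", assumed here for illustration), one can verify numerically that depolarized effects never assign probability 0 or 1 to any pure state:

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
pure_states = [np.array([1.0, x, y]) for x in (-1, 1) for y in (-1, 1)]
e_plus = np.array([0.5, 0.5, 0.0])            # sharp effect, p = (1 + x)/2

def shrink(effect, lam):
    """Mix an effect toward the maximally noisy effect u/2 (noisy for lam < 1)."""
    return lam * effect + (1.0 - lam) * 0.5 * u

noisy = shrink(e_plus, lam=0.8)
probs = [float(noisy @ s) for s in pure_states]
print(probs)                                  # approximately [0.1, 0.1, 0.9, 0.9]
assert all(0.0 < p < 1.0 for p in probs)      # no pure state identified with certainty
```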

Another important generalization is self-dualization, where states and effects are mapped to coincide under an appropriate inner product induced by a linear transformation $T$, i.e., making the transformed state cone $T(V_+)$ equal to the effect cone. This process yields theories where the state and effect structures are isomorphic (strong self-duality), as in the geometric truncation realized for polygons (Janotta et al., 2013).
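
A low-dimensional illustration, again using the gbit (the concrete choice of $T$ below is an assumption of this sketch, not taken from the cited papers): the dual of the cone over the square is the cone over the same square rotated by 45° and rescaled, so a single rotation-plus-scaling map already self-dualizes the system:

```python
import numpy as np
from itertools import product

# State cone of the gbit: generators (1, v) for the vertices v of [-1, 1]^2.
square = [np.array([1.0, *v]) for v in product((-1, 1), repeat=2)]

# Its dual cone is generated by (1, w) with w a vertex of the l1 unit ball,
# i.e. the square rotated by 45 degrees and shrunk by sqrt(2).
diamond = [np.array([1.0, 1.0, 0.0]), np.array([1.0, -1.0, 0.0]),
           np.array([1.0, 0.0, 1.0]), np.array([1.0, 0.0, -1.0])]
assert all(f @ s >= -1e-12 for f in diamond for s in square)

# Self-dualizing map T: rotation by 45 degrees times 1/sqrt(2) on the (x, y)
# block; it carries the state-cone generators onto the dual-cone generators.
T = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.5, -0.5],
              [0.0, 0.5, 0.5]])
mapped = sorted(tuple(round(float(x), 9) for x in T @ s) for s in square)
target = sorted(tuple(round(float(x), 9) for x in f) for f in diamond)
assert mapped == target                       # T(V_+) equals the effect cone
```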

3. Composite Systems, Tensor Products, and Nonlocality

Composing subsystems in GPTs is non-unique, and composite state spaces can be defined between two extreme cases:

  • Minimal tensor product $\Omega_A \otimes_{\min} \Omega_B$ (only separable states).
  • Maximal tensor product $\Omega_A \otimes_{\max} \Omega_B$, which permits all no-signaling correlations compatible with the local structures.

In the presence of restricted effects, one defines a generalized maximal tensor product, with joint states

$$\Omega_{AB} = \Bigl\{\, \omega \in V_A \otimes V_B \;:\; (u_A \otimes u_B)(\omega) = 1,\;\; (e_A \otimes \mathrm{id}_B)(\omega) \in V_+^B \ \text{and} \ (\mathrm{id}_A \otimes e_B)(\omega) \in V_+^A \ \ \forall\, e_A \in E_A,\ e_B \in E_B \,\Bigr\},$$

ensuring that all conditional states obtained by local measurements remain valid (Janotta et al., 2013).
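
The sketch below (reusing the gbit encoding from Section 1; the $3 \times 3$ coefficient-matrix representation of joint states and the PR-box correlator values are assumptions of this illustration) checks positivity on product effects and validity of conditional states for a PR-box state, and then confirms, via a linear-programming feasibility test, that this state lies outside the minimal tensor product:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Joint gbit-gbit state as a 3x3 matrix M with pairing (e_A ⊗ e_B)(w) = e_A^T M e_B.
# PR-box encoding: uniform marginals, correlators <A_x B_y> = +1 except <A_1 B_1> = -1.
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, -1.0]])

def local_effects(setting):
    """Sharp two-outcome effects of the gbit measurement labelled by `setting`."""
    effects = []
    for a in (1, -1):
        e = np.zeros(3)
        e[0], e[1 + setting] = 0.5, 0.5 * a
        effects.append(e)
    return effects

# Condition 1: nonnegative probability on every product of local effects.
for s, t in product((0, 1), repeat=2):
    for eA, eB in product(local_effects(s), local_effects(t)):
        assert eA @ M @ eB >= -1e-12

# Condition 2: every conditional state lies back in the local state space.
for s in (0, 1):
    for eA in local_effects(s):
        cond = M.T @ eA
        cond = cond / cond[0]                  # normalize the conditional state
        assert np.all(np.abs(cond[1:]) <= 1 + 1e-12)

# Not in the minimal tensor product: no mixture of product pure states gives M.
pures = [np.array([1.0, x, y]) for x in (-1, 1) for y in (-1, 1)]
P = np.array([np.outer(sA, sB).ravel() for sA in pures for sB in pures]).T
res = linprog(np.zeros(16), A_eq=P, b_eq=M.ravel(), bounds=(0, None))
print("separable:", res.status == 0)           # False: the PR box is entangled
```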

This framework accommodates theories displaying maximal nonlocal correlations such as boxworld, where the state space is a cube and the effect set is unrestricted, leading to PR-box correlations saturating the algebraic CHSH-bound of 4. Conversely, restricting the effect cone to that of Spekkens’ toy theory (the octahedron) yields local theories, as every measurement outcome is compatible with a local hidden variable (classically embeddable) model (Janotta et al., 2013).

In self-dualized theories (and quantum theory), the CHSH parameter for maximally entangled states is bounded by the Tsirelson bound, $2\sqrt{2}$. This result is a direct consequence of strong self-duality and the geometry of the state and effect sets (Janotta et al., 2013).
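
A short computation contrasting the two bounds (assuming standard conventions: the PR-box correlators above, and qubit spin measurements in the x-z plane on the maximally entangled state $|\Phi^+\rangle$):

```python
import numpy as np

# PR box: CHSH = E(0,0) + E(0,1) + E(1,0) - E(1,1) hits the algebraic bound 4.
E_pr = {(0, 0): 1.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): -1.0}
print(E_pr[0, 0] + E_pr[0, 1] + E_pr[1, 0] - E_pr[1, 1])     # 4.0

# Quantum correlators for |Phi+> with measurements A(t) = cos(t) Z + sin(t) X:
# E(a, b) = <Phi+| A(a) ⊗ A(b) |Phi+> = cos(a - b).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)            # |Phi+>
A = lambda t: np.cos(t) * Z + np.sin(t) * X
E = lambda ta, tb: phi @ np.kron(A(ta), A(tb)) @ phi
a, b = [0.0, np.pi / 2], [np.pi / 4, -np.pi / 4]
S = E(a[0], b[0]) + E(a[0], b[1]) + E(a[1], b[0]) - E(a[1], b[1])
print(S, 2 * np.sqrt(2))                                     # both ≈ 2.8284
```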

4. Contextuality, Classicality, and Resource Theories

GPTs form a landscape in which contextual and noncontextual models can be precisely delineated. A simplicial GPT, where the state space is a simplex, is equivalent to a classical probabilistic theory and is always noncontextual. The paper (Shahandeh, 2019) establishes that three properties, namely the NRH, ontological noncontextuality, and the existence of multiple nonrefinable measurements, can hold jointly only in classical (simplicial) theories.

Subtheories (sub-GPTs) that violate the NRH can always be embedded as subtheories of some larger NRH-satisfying GPT. A sub-GPT admits a noncontextual (classical) model if and only if it can be embedded in a simplicial GPT of the same dimension (Shahandeh, 2019).
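
As a toy check of classical embeddability for a single system (the construction below, mapping the gbit's four pure states to the vertices of a 3-simplex, is a standard ontic-model argument used here as an assumption of the sketch, not the general theorem):

```python
import numpy as np

# Ontic states: the k-th square vertex maps to the k-th vertex of the 3-simplex.
verts = [(x, y) for x in (-1, 1) for y in (-1, 1)]
mu = np.eye(4)                    # delta distributions over 4 ontic states

def xi(effect):
    """Response function: the effect's value on each of the four pure states."""
    e0, e1, e2 = effect
    return np.array([e0 + e1 * x + e2 * y for x, y in verts])

sharp = [(0.5, 0.5, 0.0), (0.5, -0.5, 0.0), (0.5, 0.0, 0.5), (0.5, 0.0, -0.5)]
for e in sharp:
    # Deterministic (0/1-valued) response functions on the ontic states.
    assert set(np.round(xi(e), 12)) <= {0.0, 1.0}
    # The classical model reproduces p(e|omega_k) exactly; by linearity it
    # extends to every mixed state of the gbit.
    for k, (x, y) in enumerate(verts):
        assert np.isclose(e[0] + e[1] * x + e[2] * y, xi(e) @ mu[k])
```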

Resource-theoretic treatments of contextuality for GPTs formalize a monotonic preorder of embeddability into classical systems under the free addition of classical subsystems and exact GPT embeddings, quantifying contextuality via new monotones such as “classical excess” (the minimal simulation error in embedding a GPT in a classical simplex) and parity-oblivious multiplexing success probabilities (Catani et al., 2024).

5. Operational and Experimental Aspects

Self-consistent GPT tomography can be applied to experimental data, fitting observed outcome matrices with convex polytopes representing the accessible state and effect spaces. This approach, as shown by (Mazurek et al., 2017), enables the derivation of tight bounds on the possible deviation of Nature from quantum theory. For instance, experiments on single-photon polarization yield realized GPT state polytopes whose volume deviates from that of the Bloch ball by only a small fraction. Furthermore, the framework quantitatively bounds the possible violation of nonlocality or contextuality inequalities: in the reported experiment, the maximal violation of the CHSH inequality can exceed the quantum prediction by at most a small, experimentally determined margin (Mazurek et al., 2017).
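
The core numerical step can be sketched as follows (a schematic with simulated single-qubit data, not the full self-consistent estimator of Mazurek et al., which additionally enforces positivity and normalization constraints): approximate the outcome-frequency matrix by factorizations of increasing rank and watch the residual hit the statistical noise floor at the GPT dimension, which is 4 for a qubit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a probability table D[i, j] = p(effect j | preparation i) for a qubit:
# random Bloch vectors with p = (1 + s.e)/2 for projective effects.
def random_bloch(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

S_true, E_true = random_bloch(100), random_bloch(60)
D = 0.5 * (1.0 + S_true @ E_true.T)           # exact rank-4 probability matrix

# Finite statistics: replace each probability by a frequency from 10,000 shots.
F = rng.binomial(10_000, D) / 10_000

# Rank-k fits: the residual drops sharply up to k = 4, then plateaus at noise level.
U, s, Vt = np.linalg.svd(F, full_matrices=False)
for k in (2, 3, 4, 5):
    Fk = (U[:, :k] * s[:k]) @ Vt[:k]
    print(k, float(np.sqrt(np.mean((Fk - F) ** 2))))
```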

Accessible GPT fragments, more fine-grained than full GPT descriptions, formalize the restrictions imposed by finite experimental capabilities. The concept of cone equivalence between such fragments isolates operational features (e.g., classical embeddability) that depend solely on the geometry of the state/effect cones. Fundamental results include that incompatibility and freedom-of-choice assumptions are not required to witness the failure of noncontextuality, and that detector inefficiency cannot “restore” classicality (there is no detector loophole in generalized-noncontextuality tests) (Selby et al., 2021).

6. Applications in Information Processing and Physical Principles

The GPT framework supports an array of foundational and operational analyses:

  • State Discrimination: Optimal state discrimination in GPTs admits a general convex optimization formulation; generic features include nonuniqueness of optimal measurements and cases where "no measurement" is optimal, mirroring well-known quantum results but requiring only convexity of state and effect sets (Bae et al., 2017). A minimal LP sketch follows this list.
  • Measurement Incompatibility: Equivalence between measurement compatibility, positivity of associated linear maps, inclusion of generalized spectrahedra, and norm bounds (injective and projective tensor norms) reveals the geometric roots of incompatibility across classical, quantum, and beyond-quantum theories (Bluhm et al., 2020).
  • Thermodynamics: Under appropriate spectrality and projectivity conditions, a spectral entropy function associated with unique decompositions of states into perfectly distinguishable pure states emerges. This entropy satisfies majorization and concavity properties, with thermodynamic meaning generalizing von Neumann’s arguments to GPT systems (Barnum et al., 2015).
  • Channel Compatibility and Composite Structure: Incompatibility notions for channels depend sensitively on the choice of composite effect cones. “Min-tensor compatibility” is strictly weaker than quantum compatibility; for quantum channels, almost-quantum compatibility coincides with the quantum set, but is more broadly distinguished in other GPTs (Yamada et al., 2024).
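
As referenced in the state-discrimination item above, here is a minimal linear-programming sketch of minimum-error discrimination (the two gbit states and equal priors are hypothetical inputs). For polytopic state and effect sets the convex program reduces to an LP over the effect $e_1$, with $e_2 = u - e_1$:

```python
import numpy as np
from scipy.optimize import linprog

# Discriminate two gbit states w1, w2 with equal priors p1 = p2 = 1/2.
verts = [np.array([1.0, x, y]) for x in (-1, 1) for y in (-1, 1)]
u = np.array([1.0, 0.0, 0.0])
w1, w2 = np.array([1.0, 0.8, 0.0]), np.array([1.0, -0.2, 0.6])
p1 = p2 = 0.5

# Maximize p1 e1.w1 + p2 (u - e1).w2 subject to 0 <= e1.v <= 1 on every
# extremal state v (which makes both e1 and u - e1 valid NRH effects).
V = np.array(verts)
res = linprog(-(p1 * w1 - p2 * w2),
              A_ub=np.vstack([V, -V]),
              b_ub=np.concatenate([np.ones(4), np.zeros(4)]),
              bounds=[(None, None)] * 3)
success = -res.fun + p2 * float(u @ w2)
print("optimal success probability:", success)   # 0.75 for these inputs
```

The printed value agrees with the Helstrom-type expression $\tfrac{1}{2}\bigl(1 + \max_{0 \le e \le u} e(\omega_1 - \omega_2)\bigr)$ familiar from quantum state discrimination, here evaluated over the gbit's effect set.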

GPTs provide a rigorous testbed for reconstructing quantum theory from axioms, identifying the operational signature of quantum phenomena, and exploring the boundary between quantum, classical, and "superquantum" theories.

7. Recent Developments and Open Directions

Research continues on several interrelated fronts:

  • Teleportation-Stable Theories: Demanding that a theory's CHSH value remains stable under arbitrary rounds of entanglement swapping (teleportation) yields a sharp representation-theoretic classification. Only seven GPT families meet this stringent criterion, with quantum mechanics occupying a unique point in the classification.
