
Formal Test of No Complementarities

Updated 28 October 2025
  • Formal tests of no complementarities are rigorous procedures that assess whether observed data patterns can arise without interdependent factors, confirming model additivity.
  • These methods employ inequality constraints, moment inequalities, minimax likelihood ratios, and cycle-based estimators to handle measurement noise and endogenous components.
  • The frameworks deliver actionable, statistically robust insights under minimal assumptions, facilitating validation across quantum physics, economics, and network interactions.

A formal test of "no complementarities" determines whether observed patterns in data, measurements, or economic models can be explained without invoking any complementarity between factors, goods, or observables. Across domains—quantum measurement, consumption choice, incomplete structural models, and two-sided interactions—recent research has developed rigorous procedures for operationally testing and identifying the absence or presence of complementarities. These frameworks employ inequalities, robust likelihood principles, nonparametric identification, and cycle-based statistics, ensuring the test results are valid under minimal assumptions and often accommodate latent or endogenous components.

1. Inequality-Based Tests for Non-Complementarity in Observed Statistics

Classical models of quantum measurement are a canonical setting for a direct test of complementarity. The procedure rests on the assumption that joint sharp values exist for otherwise incompatible observables, such as the Pauli matrices $\sigma_X$ and $\sigma_Z$. One constructs observed statistics from unsharp (noisy) joint measurements:

$$\tilde{p}(x, z) = \frac{1}{4}\left( 1 + x\,\overline{x} + z\,\overline{z} + xz\,\overline{xz} \right)$$

where $\overline{x}$ and $\overline{z}$ are mean values and $\overline{xz}$ is their correlation. Inverting the effects of measurement noise via known noise parameters $(\gamma_X, \gamma_Z, \gamma_{XZ})$, one reconstructs a hypothetical joint distribution:

$$p_\Lambda(x', z') = \frac{1}{4}\left( 1 + x'\,\frac{\overline{x}}{\gamma_X} + z'\,\frac{\overline{z}}{\gamma_Z} + x'z'\,\frac{\overline{xz}}{\gamma_X \gamma_Z} \right)$$

This distribution must be non-negative for all $x', z'$ to be classically valid, resulting in four linear inequalities, which may be compactly summarized:

$$1 - \left| \frac{\overline{x}}{\gamma_X} - \frac{\overline{z}}{\gamma_Z} \right| \;\geq\; \frac{\overline{xz}}{\gamma_X\gamma_Z} \;\geq\; \left| \frac{\overline{x}}{\gamma_X} + \frac{\overline{z}}{\gamma_Z} \right| - 1$$

Experimental violation of these inequalities signals the impossibility of constructing a joint classical probability distribution for both observables, thus confirming quantum complementarity (Masa et al., 2021). This test is both necessary and sufficient, and its validity does not depend on notions of entanglement or nonlocality.
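A minimal sketch of this check in Python (the function name and interface are illustrative; it assumes the means, correlation, and noise parameters have already been estimated from measurement data):

```python
def admits_classical_joint(x_bar, z_bar, xz_bar, gamma_x, gamma_z):
    """Check the four non-negativity constraints on the reconstructed
    joint distribution p_Lambda(x', z') over x', z' in {-1, +1}.
    Returns True when all four probabilities are non-negative, i.e. a
    classical joint distribution exists; False signals a violation."""
    X = x_bar / gamma_x                 # noise-inverted mean of sigma_X
    Z = z_bar / gamma_z                 # noise-inverted mean of sigma_Z
    XZ = xz_bar / (gamma_x * gamma_z)   # noise-inverted correlation
    probs = [(1 + xp * X + zp * Z + xp * zp * XZ) / 4
             for xp in (-1, +1) for zp in (-1, +1)]
    return all(p >= 0 for p in probs)

# Example: noise-inverted means of 0.8 for both observables with zero
# correlation violate the inequalities (p_Lambda(-1, -1) = -0.15 < 0).
print(admits_classical_joint(0.8, 0.8, 0.0, 1.0, 1.0))  # False
```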

2. Semiparametric Tests for Complementarity in Multinomial Choice Models

In empirical economics, formal testing for the absence of complementarities between two goods proceeds via a multinomial panel choice model with bundles. Utility is specified for each choice:

$$\begin{aligned} u_{iAt} &= X_{iAt}'\beta_0 + \alpha_{iA} + \epsilon_{iAt} \\ u_{iBt} &= X_{iBt}'\beta_0 + \alpha_{iB} + \epsilon_{iBt} \\ u_{iABt} &= u_{iAt} + u_{iBt} + \Gamma_{it} \end{aligned}$$

where $\Gamma_{it}$ measures the incremental utility from joint consumption (complementarity). Formal testing leverages changes in demand following observable covariate shifts. The critical identification and testing step works through conditional moment inequalities:

$$E\left[ \xi_{s,t}^{1}(x_s, x_t \mid z)\left(\mathbf{1}\{Y_{is}\in D_{\ell}\} - \mathbf{1}\{Y_{it}\in D_{\ell}\}\right) \,\middle|\, x_s, x_t, z \right] \geq 0$$

where $\xi_{s,t}^{1}$ detects simultaneous increases in the indices for both goods, and $D_\ell$ is the set of choices containing good $\ell$. An observed decrease in demand for one good when both indices improve is direct evidence against complementarity (i.e., of substitution). This approach is robust to endogenous covariates and arbitrary error structure, requiring only stationarity of unobserved shocks (Wang, 2023).
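A sketch of the sample analogue of this moment condition, assuming the instrument values and choice indicators have already been constructed (all names are illustrative):

```python
import numpy as np

def sample_moment(in_Dl_s, in_Dl_t, xi):
    """Sample analogue of
        E[ xi^1_{s,t} ( 1{Y_is in D_l} - 1{Y_it in D_l} ) | x_s, x_t, z ].
    in_Dl_s, in_Dl_t : boolean arrays indicating whether the period-s and
                       period-t choices lie in D_l (bundles with good l)
    xi               : nonnegative instrument values xi^1_{s,t}(x_s, x_t | z)
    """
    diff = in_Dl_s.astype(float) - in_Dl_t.astype(float)
    return float(np.mean(xi * diff))
```

A sample moment significantly below zero (relative to its sampling error) violates the inequality and is evidence of substitution between the goods.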

3. Robust Likelihood Ratio Tests in Incomplete Economic Models

When models do not dictate a unique likelihood—such as games with multiple equilibria—formal testing turns to robust minimax likelihood ratio (LR) tests based on least favorable pairs (LFPs) of distributions. The procedure constructs the level-$\alpha$ test that maximizes the minimum power across all permissible distributions under the null and alternative hypotheses:

$$\text{Reject } H_0 \text{ if } \Lambda(s) > C, \quad \text{where } \Lambda(s) = \frac{dQ_1}{dQ_0}(s)$$

Identification of the LFPs $(Q_0, Q_1)$ and sharp identifying restrictions ($\nu_\theta(A) \leq P(A) \leq \nu_\theta^*(A)$ for events $A$) reduces the test design to a convex program. For repeated experiments, the LFPs are product measures, yielding asymptotically normal test statistics and achieving minimax optimality in local power for composite or set-identified hypotheses. Application to strategic interaction in discrete games demonstrates the procedure: a rejection is triggered by an excess of single-entry outcomes, indicative of strategic non-neutrality (Kaido et al., 2019).
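Once the LFPs are in hand, the test itself is a one-dimensional threshold rule. A minimal sketch for a finite outcome space, taking the LFPs as given (solving the convex program that produces them is the substantive step and is not shown):

```python
import numpy as np

def minimax_lr_test(sample, q0, q1, alpha=0.05, n_sim=100_000, seed=0):
    """Least-favorable-pair likelihood ratio test on a finite outcome space.
    sample : array of observed outcome indices (i.i.d. repetitions)
    q0, q1 : least favorable pmfs under H0 and H1 (length-K arrays, assumed
             strictly positive here), obtained elsewhere from the convex
             program under nu_theta(A) <= P(A) <= nu*_theta(A)
    Rejects H0 when the log likelihood ratio exceeds the simulated
    level-alpha critical value under the product measure Q0^n."""
    sample = np.asarray(sample)
    log_lr = np.log(q1) - np.log(q0)        # pointwise log dQ1/dQ0
    stat = log_lr[sample].sum()             # product LFPs -> sum of logs
    rng = np.random.default_rng(seed)
    sims = rng.choice(len(q0), size=(n_sim, sample.size), p=q0)
    crit = np.quantile(log_lr[sims].sum(axis=1), 1 - alpha)
    return stat > crit
```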

4. Formal Testing of Complementarities in Two-Sided Interaction Networks

The Tukey model provides a parsimonious representation of two-sided interaction outcomes:

$$y_{ij} = \alpha_i + \psi_j + \beta_0\, \alpha_i \psi_j + \eta_{ij}$$

with $\beta_0$ as the complementarity parameter. Identification relies on the network containing at least one "informative 4-cycle"—a closed path with two agents per side, all with distinct latent productivities. The cycle-based estimator for $\beta_0$ aggregates across $L$ cycles:

$$\hat{\beta}_{L,\pi} = -\,\frac{ \frac{1}{L}\sum_{\ell=1}^{L} \hat{\Delta}_{1,\ell,\pi_\ell} }{ \frac{1}{L}\sum_{\ell=1}^{L} \hat{\Delta}_{2,\ell,\pi_\ell} }$$

where the labeling $\pi_\ell$ uses exogenous instruments to order agents within cycles. Under mild conditions, this estimator is consistent and asymptotically normal:

$$\frac{ \sqrt{L}\,( \hat{\beta}_{L,\pi} - \beta_0 ) }{ \sigma_{u,L} / ( \mu_L c_\pi ) } \to_d N(0,1)$$
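The point estimate itself is just a ratio of cycle-difference averages. A minimal sketch, assuming the per-cycle differences $\hat{\Delta}_{1,\ell,\pi_\ell}$ and $\hat{\Delta}_{2,\ell,\pi_\ell}$ have already been computed from outcomes along each informative 4-cycle (their construction, and the labeling $\pi_\ell$, are paper-specific and taken as given here):

```python
import numpy as np

def cycle_estimator(delta1, delta2):
    """Cycle-based estimator of the Tukey complementarity parameter beta_0.
    delta1, delta2 : length-L arrays of estimated cycle differences
                     Delta-hat_{1,l,pi_l} and Delta-hat_{2,l,pi_l}."""
    return -np.mean(delta1) / np.mean(delta2)
```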

The formal test of no complementarities (modularity) is:

$$\phi_{L,z}(T_{L,z}, \gamma) = \mathbf{1}\left\{ |T_{L,z}| \geq c_\gamma \right\}$$

with $T_{L,z}$ built from estimated cycle differences and variance. The approach is robust to network sparsity and remains valid provided the labelings are constructed from external instruments. The Tukey model is identified under much weaker network conditions than more flexible interaction structures (Crippa, 27 Oct 2025).
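Given a studentized statistic, the decision rule is a standard two-sided threshold. A minimal sketch, assuming $T_{L,z}$ is asymptotically standard normal under the null so that $c_\gamma$ is a normal critical value:

```python
from scipy.stats import norm

def modularity_test(T_Lz, gamma=0.05):
    """phi_{L,z} = 1{ |T_{L,z}| >= c_gamma }: rejects 'no complementarities'
    (beta_0 = 0) when the studentized cycle statistic is large."""
    c_gamma = norm.ppf(1 - gamma / 2)   # two-sided normal critical value
    return abs(T_Lz) >= c_gamma
```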

5. Identification, Empirical Relevance, and Implementation

Across these methodologies, identification requirements vary, ranging from network connectivity and informative cycles in two-sided interactions, to sharp moment inequalities in semiparametric choice models, to tractable convex programs in robust LR tests. Instrument availability, error stationarity, and network properties are empirically observable, testable prerequisites.

Practical implementation steps include:

  • Measurement or estimation of conditional means/correlations (quantum measurement, economic panel data)
  • Nonparametric estimation of choice probabilities (semiparametric models)
  • Identification and use of informative cycles with valid external instruments (two-sided networks)
  • Formulation and solution of convex optimization programs for LFPs (robust LR tests)

Monte Carlo experiments, empirical illustrations in assignment settings, and applications to discrete entry games validate these formal tests as robust, practically executable, and interpretable under conditions encountered in real data.

6. Contextual Significance and Limitations

Formal tests of no complementarities illuminate fundamental constraints in model-based inference and signal where modular (additive, non-interactive) structures suffice. These tests enable sharper economic interpretation, operational diagnostics in quantum inference, and robust inference on strategic interactions. However, their usability depends on satisfaction of key identification conditions (network structure, stationarity, instrument validity), and their conclusions are limited to the model-specific notion of complementarity adopted.

A plausible implication is that, in applied studies, careful attention to identification trade-offs and instrument selection is critical for valid inference. Furthermore, these methods delineate the non-parametric boundaries within which robust inference can be drawn, even when classical completeness or error exogeneity assumptions are absent.


| Domain | Formal Test Methodology | Identification Key |
| --- | --- | --- |
| Quantum Measurement | Inequality on (inverted) statistics | Non-negativity for all joint sharp values; violation $\Leftrightarrow$ no classical joint distribution |
| Multinomial Choice | Moment inequalities | Panel variation; stationarity; no distributional assumption on errors |
| Incomplete Models | Robust LR with LFPs | Convex program under sharp identifying restrictions |
| Two-Sided Interaction | Cycle-based estimator and test | Existence of informative cycles; valid instrument-based labeling |
