Formal Test of No Complementarities
- Formal tests of no complementarities are rigorous procedures that assess whether observed data patterns can arise without interdependence among factors, i.e., whether an additive (modular) model suffices.
- These methods employ inequality constraints, moment inequalities, minimax likelihood ratios, and cycle-based estimators to handle measurement noise and endogenous components.
- The frameworks deliver actionable, statistically robust insights under minimal assumptions, facilitating validation across quantum physics, economics, and network interactions.
A formal test of "no complementarities" determines whether observed patterns in data, measurements, or economic models can be explained without invoking any complementarity between factors, goods, or observables. Across domains—quantum measurement, consumption choice, incomplete structural models, and two-sided interactions—recent research has developed rigorous procedures for operationally testing and identifying the absence or presence of complementarities. These frameworks employ inequalities, robust likelihood principles, nonparametric identification, and cycle-based statistics, ensuring the test results are valid under minimal assumptions and often accommodate latent or endogenous components.
1. Inequality-Based Tests for Non-Complementarity in Observed Statistics
Classical models of quantum measurement are a canonical setting for the direct test of complementarity. The procedure rests on the assumption that joint sharp values exist for otherwise incompatible observables, such as the Pauli matrices $X$ and $Z$. One constructs observed statistics from unsharp (noisy) joint measurements: the mean values $\langle \tilde{X} \rangle$, $\langle \tilde{Z} \rangle$ and their correlation $\langle \widetilde{XZ} \rangle$. Inverting the effects of measurement noise via known gammas ($\langle X \rangle = \langle \tilde{X} \rangle / \gamma_X$, and analogously for $Z$ and the correlation), one reconstructs a hypothetical joint distribution over the sharp outcomes $x, z = \pm 1$:

$p(x, z) = \tfrac{1}{4}\left[\, 1 + x\langle X \rangle + z\langle Z \rangle + xz\langle XZ \rangle \,\right].$

This distribution must be non-negative for all $x, z = \pm 1$ to be classically valid, resulting in four linear inequalities, which may be compactly summarized as

$\left|\langle X \rangle \pm \langle Z \rangle\right| \;\leq\; 1 \pm \langle XZ \rangle.$
Experimental violation of these inequalities signals the impossibility of constructing a joint classical probability distribution for both observables, thus confirming quantum complementarity (Masa et al., 2021). This test is both necessary and sufficient, and its validity does not depend on notions of entanglement or nonlocality.
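The check is mechanical once the noise-corrected statistics are in hand. The sketch below (the function name, the division-by-gamma inversion convention, and the illustrative numbers are assumptions of this illustration, not taken from the source) evaluates the four reconstructed probabilities and reports any negative entries:

```python
def classical_joint_test(mx_noisy, mz_noisy, corr_noisy, gamma_x, gamma_z, gamma_xz=None):
    """Check whether noise-corrected statistics admit a classical joint distribution.

    mx_noisy, mz_noisy : observed (unsharp) mean values of the two observables
    corr_noisy         : observed correlation from the joint unsharp measurement
    gamma_x, gamma_z   : known unsharpness factors used to invert the noise
    gamma_xz           : factor for the correlation; defaults to gamma_x * gamma_z
                         (this default is an assumption of the sketch)
    """
    if gamma_xz is None:
        gamma_xz = gamma_x * gamma_z

    # Invert the noise to obtain hypothetical sharp-value statistics.
    mx, mz, cxz = mx_noisy / gamma_x, mz_noisy / gamma_z, corr_noisy / gamma_xz

    # Reconstructed joint distribution over sharp outcomes x, z = +/-1.
    probs = {(x, z): 0.25 * (1 + x * mx + z * mz + x * z * cxz)
             for x in (+1, -1) for z in (+1, -1)}

    # Classical validity requires all four probabilities to be non-negative.
    violations = {k: p for k, p in probs.items() if p < 0}
    return probs, violations

# Illustrative (made-up) statistics that fail the non-negativity check.
probs, violations = classical_joint_test(0.63, 0.63, -0.81, gamma_x=0.9, gamma_z=0.9)
print(violations)  # a non-empty dict signals that no classical joint distribution exists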
2. Semiparametric Tests for Complementarity in Multinomial Choice Models
In empirical economics, formal testing for absence of complementarities between two goods is established via a multinomial panel choice model with bundles. Utility is specified for each choice $d \in \{0, 1, 2, (1,2)\}$, with the bundle's utility equal to the sum of the single-good utilities plus an incremental term:

$U_{it}(1,2) = U_{it}(1) + U_{it}(2) + \Gamma,$

where $\Gamma$ measures incremental utility from joint consumption (complementarity). Formal testing leverages changes in demand following observable covariate shifts. The critical identification and testing step is through conditional moment inequalities:
$E \left[ \xi_{s, t}^{1}(x_{s}, x_{t}\mid z) \big(\mathbbm{1}\{Y_{is}\in D_{\ell}\} - \mathbbm{1}\{Y_{it}\in D_{\ell}\} \big) \mid x_s, x_t, z \right] \geq 0$
where $\xi_{s,t}^{1}(x_s, x_t \mid z) \geq 0$ detects simultaneous increases in the utility indices for both goods between periods $t$ and $s$, and $D_{\ell}$ denotes the set of choices containing good $\ell$. Observation of a demand decrease for one good when both indices improve is direct evidence against complementarity (i.e., presence of substitution). This approach is robust to endogenous covariates and arbitrary error structure, requiring only stationarity of unobserved shocks (Wang, 2023).
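As a rough illustration of how the moment condition can be taken to data, the sketch below (hypothetical helper names; the conditioning on $x_s, x_t, z$ is simplified to selecting observations where the instrument $\xi_{s,t}^{1}$ fires, rather than full nonparametric conditioning) forms the studentized sample analogue for one good:

```python
import numpy as np

def bundle_moment_check(y_s, y_t, idx_up_both, in_D_ell):
    """Sample analogue of the conditional moment inequality for good ell.

    y_s, y_t    : choices of each individual in periods s and t
    idx_up_both : boolean numpy mask, True where both goods' indices rose from t to s
                  (the role played by xi^1_{s,t} in the moment condition)
    in_D_ell    : function mapping a choice to True if it contains good ell
    Returns the studentized sample moment; a significantly negative value means
    demand for good ell fell although both indices improved, i.e., evidence
    against complementarity (substitution).
    """
    d = np.array([in_D_ell(a) - in_D_ell(b) for a, b in zip(y_s, y_t)], dtype=float)
    d = d[idx_up_both]                       # keep observations where the instrument fires
    m, se = d.mean(), d.std(ddof=1) / np.sqrt(len(d))
    return m / se                            # compare with a one-sided normal critical value
```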
3. Robust Likelihood Ratio Tests in Incomplete Economic Models
When models do not dictate a unique likelihood—such as games with multiple equilibria—formal testing turns to robust minimax likelihood ratio (LR) tests based on least favorable pairs (LFPs) of distributions. The procedure designs the level-$\alpha$ test that maximizes the minimum power across all distributions permitted under the null and alternative hypotheses:

$\phi^{*} = \arg\max_{\phi:\ \sup_{Q \in \mathcal{Q}_0} E_Q[\phi] \le \alpha}\ \inf_{P \in \mathcal{P}_1} E_P[\phi],$

where $\mathcal{Q}_0$ and $\mathcal{P}_1$ collect the distributions consistent with the null and the alternative, respectively.
Identification of the LFPs and of the sharp identifying restrictions (lower bounds $P(A) \geq \nu_{\theta}(A)$ on the probability of each event $A$, with $\nu_{\theta}$ the model-implied containment functional) reduces the test design to a convex program. For repeated experiments, the LFPs are product measures, yielding asymptotically normal test statistics and achieving minimax optimality in local power for composite or set-identified hypotheses. Application to strategic interaction in discrete entry games illustrates the procedure: a rejection is triggered by an excess of single-entry outcomes, indicative of strategic non-neutrality (Kaido et al., 2019).
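A minimal sketch of the final testing step, assuming the least favorable pair $(q_0, q_1)$ over a finite outcome space has already been obtained from the convex program; the function name and the normal approximation under the least favorable null are assumptions of this illustration, not the paper's exact construction:

```python
import numpy as np
from scipy.stats import norm

def minimax_lr_test(outcomes, q0, q1, alpha=0.05):
    """Plug-in likelihood-ratio test given a least favorable pair (q0, q1).

    outcomes : integer-coded observed outcomes (values 0..K-1), numpy array
    q0, q1   : least favorable null / alternative pmfs over the K outcomes,
               assumed to solve the convex LFP program
    Rejects when the standardized log-likelihood ratio exceeds the normal
    critical value (the repeated-experiment, product-measure regime).
    """
    log_lr = np.log(q1) - np.log(q0)
    llr = log_lr[outcomes]                         # per-observation log LR
    n = len(llr)
    # Center and scale under the least favorable null q0.
    mu0 = np.sum(q0 * log_lr)
    var0 = np.sum(q0 * (log_lr - mu0) ** 2)
    stat = (llr.sum() - n * mu0) / np.sqrt(n * var0)
    return stat, stat > norm.ppf(1 - alpha)
```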
4. Formal Testing of Complementarities in Two-Sided Interaction Networks
The Tukey model provides a parsimonious representation of two-sided interaction outcomes,

$Y_{ij} = \alpha_i + \psi_j + \lambda\, \alpha_i \psi_j + \varepsilon_{ij},$

with $\lambda$ as the complementarity parameter and $\alpha_i$, $\psi_j$ the latent productivities on the two sides. Identification relies on the network containing at least one "informative 4-cycle"—a closed path with two agents per side, all with distinct latent productivities. The cycle-based estimator for $\lambda$ aggregates within-cycle outcome contrasts across such cycles, with labeling based on exogenous instruments to order agents within each cycle. Under mild conditions, this estimator is consistent and asymptotically normal, so standard Gaussian inference applies.
The formal test of no complementarities (modularity) is a test of $H_0: \lambda = 0$ based on the studentized statistic $\hat{\lambda}/\widehat{\operatorname{se}}(\hat{\lambda})$, compared with standard normal critical values, with the standard error built from estimated cycle differences and their variance. This approach is robust to network sparsity and holds provided labelings are constructed from external instruments. The Tukey model is identified under much weaker network conditions than more flexible interaction structures (Crippa, 27 Oct 2025).
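A simplified illustration of the cycle logic, not the paper's estimator: under the Tukey specification above, the within-cycle double difference equals $\lambda(\alpha_i - \alpha_{i'})(\psi_j - \psi_{j'})$ plus noise, so under modularity ($\lambda = 0$) it is mean-zero. The sketch below (hypothetical names) t-tests the averaged double difference; without instrument-based labeling to orient cycles, contributions of opposite sign can cancel, which is exactly why the formal procedure orders agents with external instruments.

```python
import numpy as np
from scipy.stats import norm

def modularity_cycle_test(Y, cycles, alpha=0.05):
    """Crude test of no complementarities from 4-cycle double differences.

    Y      : dict mapping (i, j) pairs to observed outcomes
    cycles : list of 4-cycles (i, i2, j, j2), two distinct agents per side
    Under the Tukey model, D = Y[i,j] - Y[i,j2] - Y[i2,j] + Y[i2,j2]
    equals lambda*(alpha_i - alpha_i2)*(psi_j - psi_j2) plus noise, so under
    modularity (lambda = 0) each D has mean zero. A two-sided t-test on the
    cycle-averaged D gives a simple (not the paper's) modularity check.
    """
    D = np.array([Y[i, j] - Y[i, j2] - Y[i2, j] + Y[i2, j2]
                  for (i, i2, j, j2) in cycles])
    t = D.mean() / (D.std(ddof=1) / np.sqrt(len(D)))
    return t, abs(t) > norm.ppf(1 - alpha / 2)
```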
5. Identification, Empirical Relevance, and Implementation
Across these methodologies, identification requirements vary—ranging from network connectivity and informative cycles in two-sided interactions, to sharp moment inequalities in semiparametric choice models, to tractable convex programs in robust LR tests. Instrument availability, error stationarity, and the relevant network properties are empirically observable and testable prerequisites.
Practical implementation steps include:
- Measurement or estimation of conditional means/correlations (quantum measurement, economic panel data)
- Nonparametric estimation of choice probabilities (semiparametric models; a simple version is sketched after this list)
- Identification and use of informative cycles with valid external instruments (two-sided networks)
- Formulation and solution of convex optimization programs for LFPs (robust LR tests)
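For the second step, a generic frequency-based estimator of conditional choice probabilities; this is a minimal sketch with hypothetical names, and kernel smoothing would replace the discrete cells when covariates are continuous:

```python
import numpy as np

def cell_choice_probs(choices, covariate_cells):
    """Frequency estimator of conditional choice probabilities.

    choices         : numpy array of observed choices (discrete codes)
    covariate_cells : numpy array of discretized covariate cell ids, one per observation
    Returns {cell: {choice: estimated probability}} from within-cell frequencies.
    """
    out = {}
    for cell in np.unique(covariate_cells):
        sub = choices[covariate_cells == cell]
        vals, counts = np.unique(sub, return_counts=True)
        out[cell] = dict(zip(vals, counts / counts.sum()))
    return out
```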
Monte Carlo experiments, empirical illustrations in assignment settings, and applications to discrete entry games validate these formal tests as robust, practically executable, and interpretable under conditions encountered in real data.
6. Contextual Significance and Limitations
Formal tests of no complementarities illuminate fundamental constraints in model-based inference and signal where modular (additive, non-interactive) structures suffice. These tests enable sharper economic interpretation, operational diagnostics in quantum inference, and robust inference on strategic interactions. However, their usability depends on satisfaction of key identification conditions (network structure, stationarity, instrument validity), and their conclusions are limited to the model-specific notion of complementarity adopted.
A plausible implication is that, in applied studies, careful attention to identification trade-offs and instrument selection is critical for valid inference. Furthermore, these methods delineate the non-parametric boundaries within which robust inference can be drawn, even when classical completeness or error exogeneity assumptions are absent.
| Domain | Formal Test Methodology | Identification Key |
|---|---|---|
| Quantum Measurement | Inequality on noise-inverted statistics | Non-negativity of the reconstructed joint distribution for all sharp values; violation implies no classical joint distribution |
| Multinomial Choice | Moment inequalities | Panel variation; stationarity; no distributional assumption on errors |
| Incomplete Models | Robust LR with LFPs | Convex program under sharp identifying restrictions |
| Two-Sided Interaction | Cycle-based estimator and test | Existence of informative cycles; valid instrument-based labeling |