
Probability Antimatching Strategy

Updated 10 November 2025
  • Probability antimatching is a method that transforms fractional probabilistic assignments into a lottery of deterministic matchings with optimized worst-case outcomes.
  • It applies to one-sided matching by ensuring a minimum assignment guarantee and extends to stochastic choice through vector reflection across the uniform distribution.
  • The strategy leverages polynomial-time algorithms and column generation to balance efficiency, fairness, and robust performance in various allocation and behavioral models.

The Probability Antimatching Strategy refers to a max-min, or pessimistic, approach for converting a probabilistic assignment into a lottery (randomized mixture) over concrete outcomes such that the worst-case result is optimized under specified constraints. Two central but contextually distinct formulations appear in the literature: (1) in the design of randomized mechanisms for one-sided matching (assignment) problems (Demeulemeester et al., 2021), and (2) as a basis policy for avoidance behavior in stochastic choice environments, formalized as a vector reflection within the behavioral modeling literature (DiBerardino et al., 5 Nov 2025). Each formulation captures a rigorous, symmetry-based notion of “doing the opposite” of matching probabilities, with the underlying goal either to guarantee assignment size or to minimize target-seeking success.

1. Formal Definitions

In one-sided matching, the probability antimatching strategy is formulated as follows:

  • Let there be $n$ agents ($N=\{1,\dots,n\}$) and $m$ objects ($O=\{o_1,\dots,o_m\}$), with each object $o_j$ having quota $q_j$.
  • Each agent $i$ has strict ordinal preferences over $O \cup \{\emptyset\}$.
  • A deterministic matching $M$ is an integer matrix $[m_{i,j}]$ indicating assignments, with the agent and object capacity constraints enforced.
  • A probabilistic assignment $X=[x_{i,j}]$ obeys the same constraints but allows fractional assignments $x_{i,j}\in[0,1]$, interpreted as assignment probabilities.
  • The expected number of assigned agents is $\mu(X)=\sum_{i,j} x_{i,j}$.

The antimatching strategy seeks a decomposition of $X$ into a lottery over deterministic matchings $\{M^t\}$ such that each realized matching has at least $k$ assigned agents, with $k$ as large as possible (the MD($X$) problem). Requiring ex-post Pareto efficiency in each $M^t$ yields the MD-SD($X$) problem.
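As a concrete illustration (an invented toy instance, not drawn from the paper), a fractional assignment that splits two agents evenly over two unit-quota objects decomposes into a two-matching lottery in which every realized matching assigns both agents; a short Python check verifies both the marginals and the worst-case size:

```python
# Hypothetical toy instance: X splits 2 agents evenly over 2 unit-quota objects.
# The lottery {0.5: M1, 0.5: M2} reproduces X's marginals, and every support
# matching assigns both agents, so the worst-case size k equals mu(X) = 2.

X = [[0.5, 0.5],
     [0.5, 0.5]]

M1 = [[1, 0], [0, 1]]   # agent 1 -> o1, agent 2 -> o2 (size 2)
M2 = [[0, 1], [1, 0]]   # agent 1 -> o2, agent 2 -> o1 (size 2)
lottery = [(0.5, M1), (0.5, M2)]

# Marginals of the lottery must equal X entrywise.
recomposed = [[sum(w * M[i][j] for w, M in lottery) for j in range(2)]
              for i in range(2)]
assert recomposed == X

# Every realized matching assigns at least k = 2 agents.
sizes = [sum(map(sum, M)) for _, M in lottery]
assert min(sizes) == 2
```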

In stochastic choice environments (DiBerardino et al., 5 Nov 2025), antimatching is defined for a choice probability vector $p = (p_1, \dots, p_N)$ over $N$ options:

  • The “matching” vector is $m_s = p$ (probability matching).
  • The “antimatching” vector is its Euclidean reflection across the uniform distribution $u = (\tfrac{1}{N}, \dots, \tfrac{1}{N})$:

$$m_a = 2u - p = \left(\tfrac{2}{N} - p_1,\ \dots,\ \tfrac{2}{N} - p_N\right)$$

  • If $m_a$ falls outside the probability simplex $\Delta^{N-1}$, one projects back to the simplex using either (a) line tracing, or (b) nearest-point Euclidean projection.
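The reflection and one of the two projection options can be sketched in a few lines of Python; the projection shown is the standard sort-and-threshold algorithm for nearest-point Euclidean projection onto the simplex (line tracing is omitted here):

```python
def antimatch(p):
    """Reflect a probability vector p across the uniform distribution u = 1/N."""
    n = len(p)
    return [2.0 / n - pi for pi in p]

def project_to_simplex(v):
    """Nearest-point Euclidean projection onto the probability simplex,
    via the standard sort-and-threshold algorithm."""
    u = sorted(v, reverse=True)
    cum, theta = 0.0, 0.0
    for k, uk in enumerate(u, start=1):
        cum += uk
        t = (cum - 1.0) / k
        if uk - t > 0:        # theta keeps the threshold for the largest valid k
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

p = [0.7, 0.2, 0.1]
m_a = antimatch(p)                   # raw reflection; may leave the simplex
m_a_proj = project_to_simplex(m_a)   # legal avoidance policy
assert abs(sum(m_a_proj) - 1.0) < 1e-9
```

Here the most likely option under $p$ receives zero mass after projection, matching the avoidance intuition behind antimatching.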

2. Maximin Decomposition and Pessimistic Guarantees in Matching

The primary objective in the antimatching problem for assignments is to maximize $k$, the minimum size across all support matchings in a lottery for a given probabilistic matrix $X$. This converts a potentially fragile expectation guarantee into a robust, worst-case assignment lower bound. The formal maximin decomposition task (MD($X$)) constrains the decomposition such that every $M^t$ in the support assigns at least $k$ agents, and the decomposition matches $X$’s marginals.

  • If ex-post efficiency is required (MD-SD($X$)), the support matchings must also be cycle- or Pareto-efficient, precluding any post hoc trades that could strictly improve some agents’ outcomes.
  • The critical threshold is the largest $k$ such that $X$ can be decomposed this way, often denoted $z(X)$ for the optimal value.

This strategy ensures all the ex-ante allocation properties of $X$ (such as strategy-proofness, fairness, and envy-freeness) are preserved in the realized lottery, while also eliminating catastrophic shortfall outcomes in assignment size.

3. Algorithmic Methods and Computational Complexity

For robustly (ordinally) ex-post efficient $X$ (notably, the output of the Probabilistic Serial mechanism, PS), Demeulemeester et al. (2021) provide a polynomial-time decomposition algorithm:

  • (a) Augment $X$ with dummy rows or columns if $\mu(X)\notin \mathbb{N}$.
  • (b) Iteratively extract matchings $M^t$ with at least $\lfloor \mu(X) \rfloor$ assignments via cycle-pivot/network-flow steps, maintaining marginal constraints.
  • (c) Repeat until $X$ is decomposed entirely into such matchings, with the number of steps bounded by $O(n + m)$.
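A simplified sketch of the iterative-extraction idea (not the authors' exact cycle-pivot/network-flow procedure) repeatedly peels off a maximum-cardinality matching supported on the positive entries of $X$ and subtracts the largest feasible weight. This Birkhoff-style greedy loop illustrates the mechanics, but without the paper's pivoting rules it does not by itself guarantee the $\lfloor \mu(X) \rfloor$ bound for every input; unit quotas and fully assigned agents are assumed:

```python
EPS = 1e-9

def max_support_matching(X):
    """Maximum bipartite matching restricted to edges with x[i][j] > 0
    (Kuhn's augmenting-path algorithm); unit quotas assumed."""
    n, m = len(X), len(X[0])
    match_obj = [-1] * m            # object j -> matched agent, or -1

    def try_assign(i, seen):
        for j in range(m):
            if X[i][j] > EPS and j not in seen:
                seen.add(j)
                if match_obj[j] == -1 or try_assign(match_obj[j], seen):
                    match_obj[j] = i
                    return True
        return False

    for i in range(n):
        try_assign(i, set())
    return {match_obj[j]: j for j in range(m) if match_obj[j] != -1}

def decompose(X):
    """Greedy lottery over deterministic matchings whose marginals sum to X."""
    X = [row[:] for row in X]
    lottery = []
    while any(x > EPS for row in X for x in row):
        M = max_support_matching(X)                # agent i -> object j
        w = min(X[i][j] for i, j in M.items())     # largest feasible weight
        for i, j in M.items():
            X[i][j] -= w
        lottery.append((w, M))
    return lottery

lot = decompose([[0.5, 0.5], [0.5, 0.5]])
assert abs(sum(w for w, _ in lot) - 1.0) < 1e-6
assert all(len(M) == 2 for _, M in lot)   # every support matching assigns both agents
```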

If $X$ is not robustly ex-post efficient (e.g., for general outputs or RSD), the MD-SD($X$) problem is NP-hard, as even checking feasibility for a decomposition into ex-post efficient matchings is NP-complete (Aziz et al., 2015).

To address large instances or generic $X$, a column generation approach is proposed:

  • Solve a restricted master LP (RMP($k$)) over a subset of matchings of size $\ge k$.
  • Iteratively add new columns (matchings) with negative reduced cost using a dual/pricing subproblem.
  • For large support spaces (as in RSD, with up to $n!$ matchings), only a moderate initial sample of matchings is used, and columns are added as needed.

In practice, even incomplete convergence yields nearly optimal solutions (e.g., residual deviations $< 10^{-3}$, $\alpha^* > 0.97$).

4. Theoretical Results and Bounds

Key guarantees for the probability antimatching strategy as applied to matching include:

  • MD($X$) without the ex-post efficiency constraint is always solvable in polynomial time, yielding a decomposition where every matching in the support has size $\ge \lfloor \mu(X) \rfloor$.
  • For RSD (Random Serial Dictatorship) assignments, the worst-case support size $z(X^{\mathrm{RSD}})$ respects the bounds

$$p^-(I) \leq z(X^{\mathrm{RSD}}) \leq \lfloor \mu(X^{\mathrm{RSD}}) \rfloor$$

where $p^-(I)$ is the minimal size over all ex-post efficient matchings.

  • Sharper bounds: $z(X^{\mathrm{RSD}}) > \tfrac{1}{2} \lfloor \mu(X^{\mathrm{RSD}}) \rfloor$ and $z(X^{\mathrm{RSD}}) < 2\,p^-(I)$, with both bounds asymptotically tight.

These properties empirically yield improvements (worst-case assignment size $z$ vs. naïve RSD) of 5–7% of $n$ on large real-world school allocation data.

5. Probabilistic Antimatching in Human Choice Behavior

In the context of stochastic human choice, the antimatching strategy provides the avoidance-oriented counterpart to probability matching. Its formalization as vector reflection within the probability simplex supports robust, geometric operations:

  • Antimatching, computed as $m_a = 2u - p$, directly mirrors an agent’s observed or inferred environmental probabilities to minimize predictability when “hiding” rather than “seeking.”
  • Observed human choices in hide-and-seek experiments cluster near convex combinations of minimizing and antimatching vectors:

$$b \approx \alpha_h x_h + \beta_h m_a,\quad 0 \leq \alpha_h, \beta_h \leq 1$$

where $x_h$ is the minimizer (assigns all mass to the argmin of $p$).

  • Across multiple experimental settings ($N=2,3,5,7$), participants’ behaviors are well modeled by this two-basis policy representation: both avoidance and pursuit are captured by shifting weights between maximizing/minimizing and matching/antimatching.

Statistically, this approach explains over 95% of the variance in participant choice frequencies, with residual norms $<0.05$ for $N\le 5$ (DiBerardino et al., 5 Nov 2025).
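A minimal sketch of this two-basis representation, assuming the convex-combination form $b = \alpha x_h + (1-\alpha)\, m_a$ with a single free weight (the `blend` helper below is illustrative, not the authors' fitting procedure):

```python
def minimizer(p):
    """One-hot vector on the least likely option: the pure 'hide' policy x_h."""
    i = min(range(len(p)), key=lambda k: p[k])
    return [1.0 if k == i else 0.0 for k in range(len(p))]

def antimatch(p):
    """Reflection of p across the uniform distribution: m_a = 2u - p."""
    n = len(p)
    return [2.0 / n - pi for pi in p]

def blend(p, alpha):
    """Assumed convex combination b = alpha * x_h + (1 - alpha) * m_a."""
    xh, ma = minimizer(p), antimatch(p)
    return [alpha * h + (1.0 - alpha) * a for h, a in zip(xh, ma)]

p = [0.5, 0.3, 0.2]      # seeker's (environmental) choice probabilities
b = blend(p, 0.4)        # intermediate hider policy between x_h and m_a
assert abs(sum(b) - 1.0) < 1e-9
```

Because both basis vectors sum to one, any convex combination is again a probability vector (up to the simplex projection needed when $m_a$ has negative entries).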

6. Connections to Matching, Efficiency, and Mechanism Design

For allocation mechanisms, the antimatching concept highlights the robustness of realized outcomes in random assignment:

  • By enforcing matchings with large worst-case assignment size in the support of randomizations, practitioners ensure that the assignment process is “pessimist-proof,” immune to extremely poor realization outliers.
  • Ex-post Pareto efficiency in every realized outcome is preserved when robust decompositions are feasible, linking antimatching to classic mechanism design desiderata (strategy-proofness, envy-freeness).
  • The algorithmic distinction between mechanisms whose outputs are robustly ex-post efficient (like Probabilistic Serial) and those that are not (such as RSD) defines the computational boundary for tractability in antimatching decompositions.

In stochastic choice modeling, the antimatching policy offers a geometric formalism mirroring probability matching and links human avoidance to structured transformations in the representation space.

7. Practical Implications and Limitations

Application of the probability antimatching strategy in assignment contexts is especially valuable for school choice, housing allocation, and other settings where fairness and outcome reliability are paramount. The established polynomial-time algorithms facilitate tractable deployment for certain classes of probabilistic assignments (notably, Ordinally Efficient/PS outcomes), while the column generation approach allows near-optimal performance in large-scale instances.

In human behavioral modeling, the antimatching framework supports compact, high-explanatory-power accounts of empirical avoidance data. The need to project out-of-simplex reflected vectors back onto the legal space introduces mild approximation effects but aligns closely with observed participant behavior.

A limitation persists in the NP-hardness of decomposing general probabilistic assignments into large ex-post efficient matchings, suggesting computational intractability for arbitrary assignment rules and highlighting the practical necessity of heuristic or approximate algorithms as the number of agents and objects scales.


The probability antimatching strategy, encompassing both the design of robust assignment decompositions and the geometric inversion of stochastic choice behaviors, provides a unified framework for analyzing worst-case guarantees and avoidance-centric policies across economics, game theory, and cognitive modeling (Demeulemeester et al., 2021; DiBerardino et al., 5 Nov 2025).
