
Negatively Dependent Batches

Updated 23 February 2026
  • Negatively Dependent Batches are collections engineered so that extreme events co-occur less frequently than under independence, ensuring tighter variance control.
  • They are applied in quasi-Monte Carlo integration, stochastic optimization, and coding theory to achieve improved discrepancy bounds and decoding rates.
  • Construction methods include stratification, Latin hypercube sampling, and cyclic-shift protograph designs, offering practical guidelines for variance reduction and concentration.

Negatively dependent batches are collections of random variables, point sets, or codeword batches constructed so that simultaneous extreme events occur less often than under independence, as formalized by explicit upper bounds on joint probabilities or covariances. This property is exploited in statistical sampling, coding theory, risk aggregation, and stochastic optimization to achieve variance reduction, improved discrepancy, and concentration, and forms the backbone of several modern nonasymptotic probabilistic analyses of random structures.

1. Foundational Notions of Negative Dependence

Negative dependence quantifies the extent to which random variables (or sets) are less likely to be simultaneously large or small than under independence. The primary classes are:

  • γ-Negative Dependence: For Bernoulli variables $T_1,\ldots,T_N$, the collection is upper (resp. lower) $\gamma$-negatively dependent if for any subset $U\subset\{1,\dots,N\}$,

$$\mathbb{P}\Big(\bigcap_{j\in U}\{T_j=1\}\Big)\leq \gamma\prod_{j\in U} \mathbb{P}(T_j=1),$$

and similarly with all events $\{T_j=0\}$. The case $\gamma=1$ corresponds to strong negative dependence, specifically negative orthant dependence (Gnewuch et al., 2021).

  • Negative Quadrant Dependence (NQD): For real random variables $Y,Z$,

$$\mathbb{P}(Y\leq y,\ Z\leq z)\leq \mathbb{P}(Y\leq y)\,\mathbb{P}(Z\leq z),$$

which forces $\operatorname{Cov}(Y,Z)\leq 0$ when the moments exist (Chen et al., 2014).

  • Row-wise Extended Negative Dependence (END): For triangular arrays $\{X_{n,k}\}$, END asserts

$$\mathbb{P}(X_{n,1}>x_1,\dots,X_{n,n}>x_n)\leq M_n \prod_{k=1}^n\mathbb{P}(X_{n,k}>x_k)$$

for a dominating sequence $M_n\geq 1$ (Silva, 2019).

Other negative dependence properties relevant in batch contexts include negative association, negative upper/lower orthant dependence, and negative relation (Joag-Dev-Proschan) (Koike et al., 2022, Daly, 2015).
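Two of these definitions can be verified directly in small cases. The sketch below (an illustration, not taken from the cited papers) checks the upper $\gamma=1$ bound exactly for the inclusion indicators of sampling without replacement, and checks NQD empirically for the classic antithetic pair $(U, 1-U)$; all parameter values are arbitrary choices.

```python
import numpy as np
from math import comb

# (1) Upper negative dependence with gamma = 1 for sampling without
# replacement: draw k of N items, T_j = 1 if item j is selected. Exactly,
#   P(T_j = 1 for all j in U) = C(N-|U|, k-|U|) / C(N, k) <= (k/N)^|U|.
N, k = 10, 4
p = k / N  # marginal P(T_j = 1)
for u in range(1, k + 1):
    joint = comb(N - u, k - u) / comb(N, k)
    assert joint <= p**u + 1e-12

# (2) NQD for the antithetic pair (Y, Z) = (U, 1 - U): check
#   P(Y <= a, Z <= b) <= P(Y <= a) P(Z <= b) on a grid of thresholds.
rng = np.random.default_rng(0)
U = rng.random(100_000)
Y, Z = U, 1.0 - U
for a in (0.25, 0.5, 0.75):
    for b in (0.25, 0.5, 0.75):
        lhs = np.mean((Y <= a) & (Z <= b))
        assert lhs <= np.mean(Y <= a) * np.mean(Z <= b) + 1e-3

# ... and hence Cov(Y, Z) <= 0 (here exactly -Var(U) = -1/12)
assert np.cov(Y, Z)[0, 1] < 0
```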

2. Negatively Dependent Batches in Sampling and Quasi-Monte Carlo

Negative dependence in batch sampling schemes is exploited to derive variance reduction and tight concentration, as formalized in several frameworks:

  • Randomized point sets: Anderson–Gnewuch–Hebbinghaus formalize $D_d$–$\gamma$-negative dependence for batches of points in $[0,1]^d$ via indicator processes indexed by axis-parallel boxes. For
    • MC samples: $\gamma=1$ (independent)
    • Latin hypercube sampling (LHS): $\gamma=e^d$
    • Padded LHS ($d'$ LHS coordinates, $d''$ padded MC coordinates): $\gamma=e^{d'}$

$D_d$–$\gamma$-negative dependence gives direct probabilistic nonasymptotic bounds for the star discrepancy of such batches (Gnewuch et al., 2021).

  • Variance reduction: Pairwise/batchwise negative dependence ensures that

$$\operatorname{Var}\left(\frac{1}{N}\sum_{i=1}^N f(p_i)\right)\leq \operatorname{Var}\left(\frac{1}{N}\sum_{i=1}^N f(U_i)\right)$$

for monotone or quasi-monotone integrands, where the $U_i$ are independent uniform points and the $p_i$ are drawn from a negatively dependent batch design. Stratified schemes, random rank-1 lattices, scrambled nets, and LHS all qualify, with explicit negative dependence properties constructed and checked (Gnewuch et al., 2019, Gnewuch et al., 2021).

  • Discrepancy bounds: Under $D_d$–$\gamma$-negative dependence, the star discrepancy $D^*_N$ admits

$$\mathbb{P}\left(D^*_N > c\sqrt{d}/\sqrt{N}\right)\leq \exp\left(-\left(0.5c^2-10.7042\gamma\right)^d\right)$$

with constants optimal for MC, LHS, and padded LHS batches up to the value of $\gamma$ (Gnewuch et al., 2021).
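Both effects are easy to observe numerically in one dimension, where LHS reduces to stratified sampling. The sketch below (a hedged illustration; the integrand and all parameters are arbitrary choices, not from the cited papers) compares iid Monte Carlo batches against stratified batches on variance and on the one-dimensional star discrepancy.

```python
import numpy as np

rng = np.random.default_rng(1)
N, R = 16, 2000
f = lambda x: x**2  # a monotone integrand on [0, 1]

def star_discrepancy_1d(x):
    """D*_N = sup_t |#{x_i <= t}/N - t| for points in [0, 1]."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    return max(np.max(i / n - x), np.max(x - (i - 1) / n))

mc_means, lhs_means, mc_disc, lhs_disc = [], [], [], []
for _ in range(R):
    u = rng.random(N)                       # iid uniform batch
    p = (np.arange(N) + rng.random(N)) / N  # 1-D LHS: one point per stratum
    mc_means.append(f(u).mean())
    lhs_means.append(f(p).mean())
    mc_disc.append(star_discrepancy_1d(u))
    lhs_disc.append(star_discrepancy_1d(p))

# Negatively dependent (stratified) batches: smaller variance and discrepancy.
assert np.var(lhs_means) < np.var(mc_means)
assert np.mean(lhs_disc) < np.mean(mc_disc)
```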

3. Negative Dependence in Batched Stochastic Optimization and Coding

Negatively dependent batches provide rigorous variance reduction in stochastic programming and are critical in erasure coding:

  • Sample-Average Approximation (SAA): In constructing confidence intervals for stochastic programs, the lower-bound estimator $L_{n,t}$, based on averaging multiple SAA replications $v_n(D_r)$, can be made more efficient by coupling batches through negative dependence (e.g., sliced LHS, sliced OA-based LHS). Under monotonicity, $\operatorname{Cov}(v_n(D_r),v_n(D_s))\leq 0$, which reduces estimator variance compared to independent batching. Strength-2 orthogonal-array batches (full SOLH) can eliminate all main and two-way terms in variance expansions, halving or better the standard error relative to independent LHS (Chen et al., 2014).
  • Network Coding (BATS codes): In batched sparse codes, and specifically BATS codes, batch dependence can significantly degrade performance. Positive batch dependence lowers the decoding probability (variable-node recovery), since the joint success probability falls below its value under independence. By constructing minimally (or nearly negatively) dependent batches, using techniques such as cyclic-shift protograph lifting and column-weight balancing, the decoding rate is substantially improved and hardware complexity is reduced. Quantitative empirical comparisons between random BATS and cyclic-shift BATS codes demonstrate a higher and more stable rate under the near-negative dependence construction (Qing et al., 2024).
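The sliced-LHS coupling behind the SAA bullet can be sketched in one dimension: a size-$2N$ LHS is split into two size-$N$ slices, each itself a stratified (LHS) batch, and for a monotone objective the two batch averages come out negatively correlated. This is an illustrative toy with arbitrarily chosen function and parameters, not the construction of Chen et al.

```python
import numpy as np

rng = np.random.default_rng(5)
N, R = 8, 4000
f = lambda x: np.exp(x)  # a monotone test function

r_means, s_means = [], []
for _ in range(R):
    # 1-D LHS of size 2N: one point per fine stratum of width 1/(2N).
    pts = (np.arange(2 * N) + rng.random(2 * N)) / (2 * N)
    # Slicing: each coarse stratum [i/N, (i+1)/N) contains two fine-stratum
    # points; split them randomly between batches r and s, so each batch is
    # itself a size-N stratified sample and the batches are negatively coupled.
    pick = rng.integers(0, 2, size=N)
    idx = 2 * np.arange(N)
    r = pts[idx + pick]
    s = pts[idx + 1 - pick]
    r_means.append(f(r).mean())
    s_means.append(f(s).mean())

cov_rs = np.cov(r_means, s_means)[0, 1]
assert cov_rs < 0  # Cov(v(D_r), v(D_s)) < 0 across the coupled batches
```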

4. Extremal Negative Dependence and Joint Mixes

Extremal negative dependence arises when the aggregate sum or batch outcome is as tightly concentrated as possible:

  • Joint Mixes: A joint mix is a random vector $(X_1,\ldots,X_d)$ such that $\sum_i X_i = c$ almost surely. When the marginals are identical, a joint mix (when it exists) achieves the minimal possible variance (zero) for the sum. For Gaussian marginals and under the variance-balance condition $2\max_i \sigma_i^2 < \sum_i \sigma_i^2$, joint mixes coincide with NA and NOD structures and are forced by quadratic multi-marginal optimal transport under symmetric uncertainty (Koike et al., 2022).

In risk aggregation, joint mixes or their block composition minimize the supremum variance of partial sums under dependence uncertainty. With additional partial conditional independence assumptions, negative association of joint mixes can be ensured.

  • Extreme Negative Dependence (END): END sequences exist for any target marginal with finite mean, and a probabilistic construction shows that the partial-sum error $|S_n - n\mu|$ is uniformly bounded (almost surely) by a random variable depending only on the marginal, regardless of $n$. This yields $O(1/n)$ convergence in estimators, outperforming the standard $O(1/\sqrt{n})$ Monte Carlo rate (Wang et al., 2014).
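A minimal sketch of both ideas uses the antithetic pair $(U, 1-U)$, which is a $d=2$ joint mix with uniform marginals; interleaving antithetic partners gives a sequence whose partial-sum error stays bounded uniformly in $n$, mimicking the $O(1/n)$ behavior. This is an illustration, not the general construction of Wang et al.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
u = rng.random(n)
x1, x2 = u, 1.0 - u  # d = 2 joint mix: X1 + X2 = 1 almost surely
assert np.allclose(x1 + x2, 1.0)  # Var(X1 + X2) = 0, the extremal case

# Interleave antithetic partners into one sequence with mean mu = 1/2.
s = np.stack([x1, x2], axis=1).ravel()
# |S_m - m*mu| never exceeds 1/2 (up to float rounding), uniformly in m,
# versus the O(sqrt(m)) fluctuations of an iid sequence; the running-mean
# error is therefore O(1/m).
err = np.abs(np.cumsum(s) - 0.5 * np.arange(1, 2 * n + 1))
assert err.max() <= 0.5 + 1e-6
```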

5. Strong Laws and Concentration for Negatively Dependent Arrays

Strong laws of large numbers (SLLN) extend to weakly dominated, row-wise END triangular arrays under very general moment and tail decay conditions:

  • Row-wise END SLLN: For arrays $\{X_{n,k}\}$ with END controlled by a dominating sequence $M_n$ and mean-dominated by $X\in L^1$, under tractable norming and truncation one obtains almost sure convergence:

$$\frac{1}{b_n}\sum_{k=1}^n \big(X_{n,k}-\mathbb{E}X_{n,k}\big) \to 0 \qquad \text{a.s.}$$

as $n\to\infty$. Corollaries give complete convergence for polynomial norming under weak $L^{2p}$ moment conditions, aligning with classical independent results and subsuming earlier findings (Silva, 2019).

  • Rosenthal's Inequality under Negative Dependence: Kolmogorov and Rosenthal-type bounds for partial sums and maxima survive with the same exponents (after constants) for negatively dependent sequences under both additive and sub-linear expectations, enabling SLLN under optimal integrability conditions (e.g. finite Choquet integral) (Zhang, 2014).
  • Concentration and Stochastic Orderings: Negatively dependent sums, including totally negatively dependent indicators, are more concentrated than the Poisson of equal mean in convex order and admit tight entropy, Poisson, and binomial approximation, with sharper concentration inequalities and sub-Poissonian tails (Daly, 2015).
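The concentration comparison is visible in the simplest case: the sum of sampling-without-replacement indicators is hypergeometric, with strictly smaller variance than the independent (binomial) benchmark of equal mean. A quick numerical check, with arbitrarily chosen parameters:

```python
import numpy as np

# Population of N_pop items, K of them "successes", n draws without
# replacement. The count of successes is hypergeometric with the same
# mean n*p as Binomial(n, p), but a smaller variance (finite-population
# correction factor (N_pop - n) / (N_pop - 1) < 1).
N_pop, K, n = 50, 20, 10
p = K / N_pop
var_binom = n * p * (1 - p)
var_hyper = n * p * (1 - p) * (N_pop - n) / (N_pop - 1)
assert var_hyper < var_binom

# Simulation agrees with the closed-form variance.
rng = np.random.default_rng(4)
draws = rng.hypergeometric(K, N_pop - K, n, size=200_000)
assert abs(draws.var() - var_hyper) < 0.05
```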

6. Practical Guidelines and Construction Methods

Several concrete algorithmic and design principles for constructing negatively dependent batches arise:

| Domain | Key Construction Principles | Example/Reference |
| --- | --- | --- |
| Quasi-Monte Carlo | Stratification, pairwise/orthant ND, coordinatewise NQD | Latin hypercube, OA designs (Gnewuch et al., 2021, Gnewuch et al., 2019) |
| SAA/stochastic opt. | Sliced LHS or OA-based slicing, enforced negative dependence across batches | SAA lower-bound validation (Chen et al., 2014) |
| Coding theory | Base-protograph design, cyclic-shift lifting, column-degree balancing | Cyclic-shift BATS (Qing et al., 2024) |
| Risk aggregation | Block mixability, END construction, latent-center or conditional-independence algorithms | Complete mixability, END (Wang et al., 2014, Koike et al., 2022) |

General prescriptions include: enforce stratification or orthogonality at the batch level, control column- and row-weights in combinatorial designs, and use conditionally independent or variance-balanced splitting for exchangeable or Gaussian settings.
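As one concrete instance of batch-level stratification, a Latin hypercube sample can be built in a few lines; every one-dimensional projection then has exactly one point per stratum. This is a generic construction sketch, independent of any particular reference above.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n points in [0, 1]^d whose every 1-D projection is stratified."""
    # Each coordinate: a uniformly random permutation of the n strata
    # (argsort of iid uniforms), then a uniform jitter within each stratum.
    perms = np.argsort(rng.random((d, n)), axis=1).T  # shape (n, d)
    return (perms + rng.random((n, d))) / n

rng = np.random.default_rng(6)
X = latin_hypercube(16, 3, rng)
# Check: each coordinate has exactly one point per stratum [i/16, (i+1)/16).
for j in range(3):
    strata = sorted(np.floor(16 * X[:, j]).astype(int).tolist())
    assert strata == list(range(16))
```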

7. Optimality, Trade-offs, and Theoretical Limits

Probabilistic discrepancy bounds for negatively dependent batches in $[0,1]^d$ are optimal up to constants in $\sqrt{d}/\sqrt{N}$. In batch construction there is a trade-off between negative dependence (small $\gamma$) and other objectives such as marginal (projection) variance or design implementability: a slight loss in orthant negativity can produce dramatic variance reduction in projections, or decoding-rate improvements (as in padded LHS or protograph design in BATS).

Extremal negative dependence (e.g., joint mixes, END) provides ideals that are generally unattainable in the heterogeneous-marginal setting, forcing variance-balancing trade-offs; for identical marginals it offers unique optimality in multi-marginal problems and risk aggregation (Koike et al., 2022, Wang et al., 2014).

In summary, negatively dependent batches—appropriately constructed—enable explicit, tight probabilistic controls in sampling, optimization, coding, and aggregation, with powerful tools now available for their systematic analysis and practical deployment (Gnewuch et al., 2021, Gnewuch et al., 2019, Chen et al., 2014, Qing et al., 2024, Koike et al., 2022, Daly, 2015, Wang et al., 2014, Silva, 2019, Zhang, 2014).
