
Blanket Divergence in Differential Privacy

Updated 29 January 2026
  • Blanket divergence is a privacy metric that quantifies how shuffling local randomizer outputs amplifies privacy by leveraging the minimal 'blanket density' and the shuffle index.
  • It provides a rigorous framework through asymptotic analysis and concentration bounds to measure privacy loss under varying participation rates.
  • Practical implementations use FFT-based algorithms to compute the divergence for mechanisms like k-randomized response and the generalized Gaussian, ensuring controlled error in privacy accounting.

Blanket divergence is a fundamental measure in the analysis of privacy amplification by shuffling in distributed data collection, particularly within the shuffle model of local differential privacy. As established in the work of Takagi et al., the blanket divergence encapsulates the contribution of the data-independent part of the local randomizer (the "blanket density") to the privacy guarantee when output messages are shuffled before aggregation. Its asymptotic behavior is governed by a single parameter, the shuffle index $\chi$, which quantifies the efficiency of privacy amplification with respect to both the randomizer and the participation rate.

1. Formal Definition of Blanket Divergence

Let $\mathcal R:\mathcal X\to\mathcal Y$ denote a local randomizer such that, for each $x\in\mathcal X$, the output distribution has density $\mathcal R_x(y)$ relative to a base measure on $\mathcal Y$. The blanket density is given by

$$\underline{\mathcal R}(y)=\inf_{x\in\mathcal X}\mathcal R_x(y),\qquad \gamma=\int_{\mathcal Y}\underline{\mathcal R}(y)\,dy,\qquad \mathcal R_{\mathrm{BG}}(y)=\frac{1}{\gamma}\,\underline{\mathcal R}(y).$$

For neighboring inputs $x_1\ne x_1'$ and parameter $\epsilon\ge 0$, define the privacy-amplification random variable

$$l_\epsilon(y)=\frac{\mathcal R_{x_1}(y)-e^\epsilon\,\mathcal R_{x_1'}(y)}{\mathcal R_{\mathrm{BG}}(y)},\qquad y\sim\mathcal R_{\mathrm{BG}}.$$

The blanket divergence is then the hockey-stick divergence bound

$$\mathcal D^{\mathrm{blanket}}_{e^\epsilon,\,n,\,\mathcal R_{\mathrm{BG}},\,\gamma}(\mathcal R_{x_1}\,\|\,\mathcal R_{x_1'})=\frac{1}{n\gamma}\,\mathbb E\left[\max\left\{\sum_{i=1}^{M} l_\epsilon(Y_i),\,0\right\}\right],$$

where $M\sim\mathrm{Binomial}(n,\gamma)$ and $Y_1,\dots,Y_M$ are i.i.d. samples from $\mathcal R_{\mathrm{BG}}$. Alternatively, after a size-biasing argument,

$$\mathcal D^{\mathrm{blanket}}=\mathbb E\left[l_\epsilon(Y_1)\,\Pr\left[\sum_{i=1}^{M'} l_\epsilon(Y_i)>0\;\Big|\;Y_1\right]\right]$$

with $M'\sim 1+\mathrm{Binomial}(n-1,\gamma)$. The blanket $\mathcal R_{\mathrm{BG}}$ represents the participation-weighted minimal output probability, and $l_\epsilon(Y_i)$ quantifies the per-sample privacy loss; the aggregate divergence measures the degree of privacy under random participation and maximal adversarial alignment.
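As a concrete illustration, the defining expectation can be estimated by Monte Carlo for $k$-randomized response, where the blanket is uniform over the $k$ outputs and $l_\epsilon$ takes only three values. The sketch below is ours, not the paper's accountant; the function name is hypothetical:

```python
import math
import random

def krr_blanket_divergence_mc(k, eps0, eps, n, trials=2000, seed=0):
    """Monte Carlo sketch of the blanket divergence for k-randomized
    response: average max{sum_i l_eps(Y_i), 0} over M ~ Binomial(n, gamma)
    draws from the uniform blanket R_BG, divided by n*gamma."""
    rng = random.Random(seed)
    p = math.exp(eps0) / (math.exp(eps0) + k - 1)
    q = 1.0 / (math.exp(eps0) + k - 1)
    gamma = k * q  # blanket mass for k-RR
    # The three possible values of l_eps(y) under the uniform blanket R_BG.
    l_x1 = k * (p - math.exp(eps) * q)      # y = x1, probability 1/k
    l_x1p = k * (q - math.exp(eps) * p)     # y = x1', probability 1/k
    l_other = k * q * (1 - math.exp(eps))   # otherwise, probability (k-2)/k
    acc = 0.0
    for _ in range(trials):
        # M ~ Binomial(n, gamma): how many users emit a blanket sample.
        m = sum(1 for _ in range(n) if rng.random() < gamma)
        s = 0.0
        for _ in range(m):
            u = rng.random()
            s += l_x1 if u < 1 / k else (l_x1p if u < 2 / k else l_other)
        acc += max(s, 0.0)
    return acc / (trials * n * gamma)
```

The Monte Carlo error decays only as the inverse square root of the trial count, which is why the FFT-based accountant of Section 6 is preferred for certified error bounds.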

2. Asymptotic Expansion Under Central Limit Regime

Under mild moment and nondegeneracy assumptions, and without requiring pure LDP, consider a vanishing privacy parameter $\epsilon_n\to 0$ subject to $\epsilon_n=\omega(n^{-1/2})$ and $\epsilon_n=O(\sqrt{\log n/n})$. Let

$$Z_i=B_i\,l_{\epsilon_n}(Y_i),\qquad B_i\sim\mathrm{Bernoulli}(\gamma),\qquad S_n=\sum_{i=1}^n Z_i$$

with $\mathrm{Var}(Z_i)=\sigma_{\epsilon_n}^2>0$ and mean $\mu_{\epsilon_n}=\gamma(1-e^{\epsilon_n})$. Setting

$$t_n=-\frac{n\,\mu_{\epsilon_n}}{\sigma_{\epsilon_n}\sqrt{n}}$$

the blanket divergence admits an asymptotic expansion (via Edgeworth and moderate-deviation theory):

$$\mathcal D^{\mathrm{blanket}}_{e^{\epsilon_n},\,n,\,\mathcal R_{\mathrm{ref}},\,\gamma}=\varphi(\chi\epsilon_n\sqrt{n})\,\frac{1}{\chi^3\epsilon_n^2 n^{3/2}}\,(1+o(1)),$$

where $\varphi$ is the standard normal density and $\chi$ is the shuffle index. This result provides a precise quantification of privacy amplification: the leading term depends only on $\chi$, establishing the universally amplified regime of $\epsilon_n$ under shuffling.
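The leading term is straightforward to evaluate numerically; a minimal sketch (illustrative only, with $\chi$ supplied by the mechanism at hand, and the function name ours):

```python
import math

def blanket_divergence_asymptotic(chi, eps_n, n):
    """Leading-order term of the expansion:
    phi(chi * eps_n * sqrt(n)) / (chi^3 * eps_n^2 * n^{3/2}),
    where phi is the standard normal density."""
    t = chi * eps_n * math.sqrt(n)
    phi = math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
    return phi / (chi ** 3 * eps_n ** 2 * n ** 1.5)
```

The Gaussian factor makes the value extremely sensitive to $\chi$: doubling $\chi$ squares the exponential decay in $\chi\epsilon_n\sqrt n$, consistent with higher $\chi$ meaning stronger amplification.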

3. The Shuffle Index $\chi$ and Mechanism Dependence

The shuffle index is defined as

$$\chi:=\frac{\sqrt{\gamma}}{\sigma},\qquad \sigma^2:=\mathrm{Var}\left(l_0(Y;x_1,x_1';\mathcal R_{\mathrm{ref}})\right).$$

In the upper-bound analysis, $\mathcal R_{\mathrm{ref}}=\mathcal R_{\mathrm{BG}}$; in the lower-bound analysis, $\mathcal R_{\mathrm{ref}}=\mathcal R_x$ for some $x$.

Interpretation: $\chi$ characterizes the "shuffle efficiency," quantifying how blanket mass and randomizer variability interact to yield the $\epsilon\to\epsilon\sqrt{n}$ privacy-amplification regime. Higher $\chi$ yields stronger amplification.

Example Computations:

  • For $k$-randomized response ($k$-RR) with local privacy parameter $\epsilon_0$:

$$p=\frac{e^{\epsilon_0}}{e^{\epsilon_0}+k-1},\qquad q=\frac{1}{e^{\epsilon_0}+k-1}$$

$$\gamma=kq,\qquad \sigma^2=2k(p-q)^2,\qquad \chi_{\mathrm{lo}}=\sqrt{\frac{q}{2(p-q)^2}}$$

  • For the generalized Gaussian mechanism:

$$\gamma=\int \inf_{x\in[0,1]} \frac{\beta}{2c\,\Gamma(1/\beta)}\, e^{-|y-x|^\beta/c^\beta}\,dy$$

and $\sigma^2$ is the variance of $l_0(y)$ (as above). Closed-form or numerical evaluation is available for $\beta=1,2$.
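Both example computations can be checked numerically. The sketch below assumes the closed forms above for $k$-RR and uses midpoint quadrature for the generalized Gaussian blanket mass; the function names are ours:

```python
import math

def krr_params(k, eps0):
    """Closed-form gamma, sigma^2, and chi_lo for k-randomized response."""
    p = math.exp(eps0) / (math.exp(eps0) + k - 1)
    q = 1.0 / (math.exp(eps0) + k - 1)
    gamma = k * q
    sigma2 = 2 * k * (p - q) ** 2
    chi_lo = math.sqrt(q / (2 * (p - q) ** 2))
    return gamma, sigma2, chi_lo

def gen_gaussian_gamma(beta, c, grid=50000, span=12.0):
    """Blanket mass gamma = \\int inf_{x in [0,1]} density(y - x) dy for the
    generalized Gaussian mechanism, by midpoint-rule quadrature.  For a
    unimodal density, the infimum over x in [0,1] is attained at the
    endpoint of [0,1] farthest from y."""
    norm = beta / (2 * c * math.gamma(1.0 / beta))
    lo, hi = -span * c, 1 + span * c  # truncate negligible tails
    h = (hi - lo) / grid
    total = 0.0
    for i in range(grid):
        y = lo + (i + 0.5) * h
        d = max(abs(y), abs(y - 1))  # distance to the farthest endpoint
        total += norm * math.exp(-((d / c) ** beta)) * h
    return total
```

For $\beta=2$, $c=1$, the integral reduces to $\mathrm{erfc}(1/2)$ by symmetry about $y=1/2$, which gives a convenient sanity check on the quadrature.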

4. Tightness of Upper and Lower Bounds: Structural Conditions

Theorem 3.4 (Structural Condition) establishes the regime under which the blanket-divergence bounds are tight. For all pairs $(x_1,x_1')$, define the disagreement set $A(x_1,x_1')=\{y:\mathcal R_{x_1}(y)\ne\mathcal R_{x_1'}(y)\}$, and shuffle indices $\chi_{\mathrm{lo}},\chi_{\mathrm{up}}$ as the infima over the references $\mathcal R_{\mathrm{BG}},\mathcal R_x$, respectively.

  • Always $\chi_{\mathrm{up}}\ge\chi_{\mathrm{lo}}$.
  • Equality ($\chi_{\mathrm{up}}=\chi_{\mathrm{lo}}$) holds for a pair $(x_1^*,x_1'^*)$ iff there exists $x^*\in\mathcal X$ s.t.

$$\mathcal R_{x^*}(y)=\gamma\,\mathcal R_{\mathrm{BG}}(y)=\inf_{z\in\mathcal X}\mathcal R_z(y)\quad\text{for a.e. } y\in A(x_1^*,x_1'^*)$$

For $k$-RR with $k\ge 3$, the minimal-variance reference exactly saturates the blanket on the disagreement set, so the band collapses (asymptotically exact bounds). For generalized Gaussian mechanisms, no single $x$ saturates the blanket over the full disagreement region, resulting in distinct $\chi_{\mathrm{lo}},\chi_{\mathrm{up}}$.
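The $k$-RR dichotomy can be verified directly: the sketch below (our naming) searches for an $x^*$ whose output density attains the blanket $\inf_z \mathcal R_z(y)$ everywhere on the disagreement set. Any input outside $\{x_1,x_1'\}$ works when $k\ge 3$, while no such input exists for $k=2$:

```python
import math

def krr_density(k, eps0, x, y):
    """Output probability R_x(y) for k-randomized response on {0,...,k-1}."""
    p = math.exp(eps0) / (math.exp(eps0) + k - 1)
    q = 1.0 / (math.exp(eps0) + k - 1)
    return p if x == y else q

def saturates_blanket(k, eps0, x1, x1p):
    """Structural condition check for k-RR: is there an x* with
    R_{x*}(y) = inf_z R_z(y) for every y in the disagreement set
    A(x1, x1')?  For k-RR, A(x1, x1') = {x1, x1'}."""
    A = [y for y in range(k)
         if krr_density(k, eps0, x1, y) != krr_density(k, eps0, x1p, y)]
    for xstar in range(k):
        if all(krr_density(k, eps0, xstar, y) ==
               min(krr_density(k, eps0, z, y) for z in range(k))
               for y in A):
            return True
    return False
```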

5. Asymptotic Privacy Band for $(\epsilon_n,\delta_n)$-DP

Fix a target $\delta_n\approx\alpha/n$ with $\alpha>0$, and for any $\chi>0$ define:

$$\epsilon_n(\alpha,\chi)=\ln\left(1+\sqrt{\frac{2}{\chi^2 n}\,W\!\left(\frac{\sqrt{n}}{2\alpha\chi\sqrt{2\pi}}\right)}\right),$$

where $W$ is the principal branch of the Lambert $W$ function.

There exist constants

$$\underline\chi_{\mathrm{up}}=\inf_{x_1\ne x_1'}\chi_{\mathrm{up}}(x_1,x_1'),\qquad \underline\chi_{\mathrm{lo}}=\inf_{x_1\ne x_1'}\chi_{\mathrm{lo}}(x_1,x_1')$$

such that for all large nn,

$$\epsilon_n(\alpha,\underline\chi_{\mathrm{up}})\le\epsilon_n^*\le\epsilon_n(\alpha,\underline\chi_{\mathrm{lo}})\qquad\text{and}\qquad \delta_n=\frac{\alpha}{n}\,(1+o(1))$$

Thus, the exact privacy level $\epsilon_n^*$ is tightly sandwiched within an asymptotic band determined by $(\underline\chi_{\mathrm{lo}},\underline\chi_{\mathrm{up}})$.
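The band formula is easy to evaluate once $W$ is available; the sketch below computes the principal Lambert $W$ branch by Newton iteration (helper names are ours, and a library routine such as one from SciPy could be substituted):

```python
import math

def lambert_w(x, iters=50):
    """Principal branch of Lambert W for x >= 0, via Newton's method
    on f(w) = w * e^w - x, starting from log(1 + x)."""
    w = math.log1p(x)
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1))
    return w

def eps_band(alpha, chi, n):
    """epsilon_n(alpha, chi) from the asymptotic band formula:
    ln(1 + sqrt((2 / (chi^2 n)) * W(sqrt(n) / (2 alpha chi sqrt(2 pi)))))."""
    arg = math.sqrt(n) / (2 * alpha * chi * math.sqrt(2 * math.pi))
    return math.log(1 + math.sqrt(2 / (chi ** 2 * n) * lambert_w(arg)))
```

Note the qualitative behavior: $\epsilon_n(\alpha,\chi)$ decreases both in $n$ (more users, stronger amplification) and in $\chi$ (more efficient shuffling), so $\underline\chi_{\mathrm{up}}\ge\underline\chi_{\mathrm{lo}}$ indeed yields a lower and an upper endpoint of the band.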

6. Practical Blanket-Divergence Accountant via FFT

Computing the blanket divergence for finite $n$ with controlled relative error $\eta>0$ and running time $\tilde O(n/\eta)$ proceeds by the following steps:

  • Truncation: Restrict $l_\epsilon(Y)$ to $[s,s+w^{\mathrm{in}}]$ so that the tail mass $q=\Pr[l_\epsilon(Y)\notin[s,s+w^{\mathrm{in}}]]=O((n\gamma)^{-1}\eta)$.
  • Discretization: Discretize the truncated $Z^{\mathrm{tr}}$ on a mesh of width $h$, controlling the error in the mean,

$$\Delta=\left|\mathbb E[Z^{\mathrm{tr}}]-\mathbb E[Z^{\mathrm{di}}]\right|=O\!\left(\frac{\eta}{n\gamma}\right),$$

and tail probabilities via Bernstein bounds.

  • FFT Convolution: Zero-pad and apply the FFT to compute the $(n-1)$-fold self-convolution of the PMF of $Z^{\mathrm{di}}$ at cost $O(N\log N)$, with $N=w^{\mathrm{out}}/h$.
  • Aggregate Probability: Recover the relevant tail via

$$\Pr\left[l_{\epsilon}(Y_1)+\sum_{i=2}^{M} Z_i>0\right]=\Pr\left[\sum_{i=2}^{M} Z_i>-l_\epsilon(Y_1)\right]$$

and combine with the relevant measures $\mathcal R_{x_1},\mathcal R_{x_1'}$.

Four error sources (truncation, discretization, aliasing, and CLT coupling) are tuned such that each contributes at most $(\eta/4)\,\mathcal D$, yielding certified relative error $O(\eta)$. In the moderate-deviation regime, typical parameters satisfy $w^{\mathrm{in}}=\Theta(n^\alpha)$, $h=\Theta(c/\sqrt{n\log n})$, $w^{\mathrm{out}}=\Theta(\sqrt{n\log n})$, and $N=O(n/\eta\,(\log n)^2)$, so the total complexity is $\tilde O(n/\eta)$. The midpoint between the one-sided bounds yields the final estimate.
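The FFT convolution step can be sketched in pure Python as follows. This is an illustrative implementation of the $m$-fold self-convolution via pointwise powers in the Fourier domain, not the paper's tuned accountant: truncation, Bernstein tail control, and error certification are omitted, and the helper names are ours.

```python
import cmath
import math

def fft(a, invert=False):
    """Iterative radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    a = list(a)
    j = 0
    for i in range(1, n):  # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:  # butterfly passes
        ang = (2 if invert else -2) * math.pi / length
        wl = cmath.exp(1j * ang)
        for start in range(0, n, length):
            w = 1.0 + 0j
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w
                a[k], a[k + length // 2] = u + v, u - v
                w *= wl
        length <<= 1
    if invert:
        a = [x / n for x in a]
    return a

def pmf_self_convolve(pmf, m):
    """PMF of the sum of m i.i.d. copies of a discretized variable, via
    pointwise m-th powers in the Fourier domain.  Zero-padding to the next
    power of two covering the full support avoids wrap-around aliasing."""
    support = (len(pmf) - 1) * m + 1
    N = 1
    while N < support:
        N <<= 1
    fa = fft(list(pmf) + [0.0] * (N - len(pmf)))
    out = fft([x ** m for x in fa], invert=True)
    return [max(x.real, 0.0) for x in out[:support]]
```

Raising the transform to the $m$-th power replaces $\log_2 m$ repeated convolutions with a single forward/inverse FFT pair, which is what makes the $O(N\log N)$ cost of the accountant attainable.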

Table: Blanket divergence, mechanism-specific parameters

| Mechanism Type | Blanket Mass $\gamma$ | Shuffle Index $\chi_{\mathrm{lo}}$ |
| --- | --- | --- |
| $k$-RR ($k\ge 3$) | $kq,\; q=1/(e^{\epsilon_0}+k-1)$ | $\sqrt{q/(2(p-q)^2)}$ |
| Generalized Gaussian | $\int\inf_{x}\dots\,dy$ | Numerical $\chi$ via variance calculation |

The blanket divergence, in conjunction with the shuffle index and FFT-based numerical accounting, establishes a rigorous, tight framework for privacy analysis in the shuffle model beyond pure-LDP assumptions (Takagi et al., 27 Jan 2026).
