
Sliced Rényi Pufferfish Privacy

Updated 7 December 2025
  • Sliced Rényi Pufferfish Privacy (SRPP) generalizes Pufferfish privacy by using one-dimensional directional Rényi divergences for tractable, geometry-aware privacy guarantees.
  • It defines Ave-SRPP and Joint-SRPP aggregations that enable closed-form anisotropic noise calibration, addressing challenges in high-dimensional optimal transport.
  • SRPP introduces practical composition methods such as the History-Uniform Cap and ms-HUC to support iterative learning while balancing privacy and utility.

Sliced Rényi Pufferfish Privacy (SRPP) generalizes the Pufferfish privacy framework by leveraging directional (sliced) Rényi divergences for privacy accounting. SRPP addresses two central obstacles in Rényi Pufferfish Privacy (RPP): the prohibitive complexity of high-dimensional optimal transport and the lack of a mechanism-agnostic composition rule for iterative learning. SRPP achieves tractable, geometry-aware privacy guarantees by replacing high-dimensional comparisons with a collection of one-dimensional directional comparisons along a set of unit vectors (a “slice profile”). It enables closed-form, statistically stable, anisotropic noise calibration for privatization mechanisms, and offers rigorous composition for iterative deep learning via the History-Uniform Cap (HUC) and its mean-square variant (ms-HUC) (Zhang et al., 30 Nov 2025).

1. Formal Definition and Divergence Framework

Given probability measures $P, Q$ on $\mathbb{R}^d$ with respective densities $p, q$, and a unit vector $u \in S^{d-1}$, the order-$\alpha$ ($\alpha > 1$) directional Rényi divergence is defined by

$$D^{(u)}_{\alpha}(P \Vert Q) = \frac{1}{\alpha-1} \log \int_{\mathbb{R}} p_u(t)^{\alpha}\, q_u(t)^{1-\alpha}\, dt$$

where $p_u, q_u$ are the push-forward densities under the projection $x \mapsto \langle x, u \rangle$.

Aggregating these divergences over a slice profile $\mathcal{U} = \{u_1, \dots, u_m\} \subset S^{d-1}$, with weights $\omega = (\omega_1, \dots, \omega_m)$, two aggregation schemes are introduced:

  • Ave-SRPP (Average Sliced Rényi Pufferfish Privacy):

$$\text{AveSD}^{\omega}_{\alpha}(P \Vert Q) = \sum_{\ell=1}^m \omega_\ell \, D^{(u_\ell)}_{\alpha}(P \Vert Q)$$

  • Joint-SRPP (Joint Log-Moment Sliced Rényi Pufferfish Privacy):

$$\text{JSD}^{\omega}_{\alpha}(P \Vert Q) = \frac{1}{\alpha-1} \log \left( \sum_{\ell=1}^m \omega_\ell \, e^{(\alpha-1) D^{(u_\ell)}_{\alpha}(P \Vert Q)} \right)$$

A mechanism $M$ satisfies $(\alpha, \varepsilon, \omega)$-Ave-SRPP if, for all secret pairs $(s_i, s_j)$ and priors $\theta$,

$$\text{AveSD}^{\omega}_{\alpha}\big(\text{Law}(M \mid s_i,\theta) \,\Vert\, \text{Law}(M \mid s_j,\theta)\big) \leq \varepsilon$$

and similarly for Joint-SRPP using $\text{JSD}^{\omega}_{\alpha}$.

Ordering is established by

$$\text{AveSD}^{\omega}_{\alpha}(P \Vert Q) \leq \text{JSD}^{\omega}_{\alpha}(P \Vert Q) \leq D_{\alpha}(P \Vert Q)$$

where $D_{\alpha}(\cdot \Vert \cdot)$ is the standard order-$\alpha$ Rényi divergence (Zhang et al., 30 Nov 2025).
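
To make the two aggregation schemes concrete, the short Python sketch below (a minimal illustration with hypothetical per-slice divergence values, not code from the paper) computes both aggregates and checks the ordering numerically; $\text{AveSD} \leq \text{JSD}$ follows from Jensen's inequality.

```python
import numpy as np

alpha = 4.0
# Hypothetical per-slice directional Renyi divergences D_alpha^{(u_l)}(P || Q)
divs = np.array([0.02, 0.10, 0.35, 0.08])
w = np.full(len(divs), 1.0 / len(divs))  # uniform slice weights

# AveSD: weighted average of the per-slice divergences
ave_sd = float(w @ divs)
# JSD: log-moment aggregation of the same per-slice divergences
jsd = float(np.log(w @ np.exp((alpha - 1.0) * divs)) / (alpha - 1.0))

assert ave_sd <= jsd  # ordering AveSD <= JSD (<= D_alpha) by Jensen's inequality
print(f"AveSD = {ave_sd:.4f}, JSD = {jsd:.4f}")
```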

2. Slicing Geometry and Slice Profile

The slice profile $\mathcal{U}$ is typically composed of $m$ directions drawn independently and uniformly from the unit sphere $S^{d-1}$. Typical practice uses $m \approx 100$–$500$ to balance approximation fidelity against computational cost. Increasing $m$ provides a closer approximation to the continuous setting, whereas a small $m$ can undersample critical geometric features. The weights $\omega_\ell$ may be uniform or adapted to the data geometry, further enabling geometry-aware privacy calibration.
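
A minimal sketch of this construction, assuming the standard normalized-Gaussian sampler for uniform directions on the sphere (function and variable names are our own):

```python
import numpy as np

def sample_slice_profile(d, m, rng):
    """Draw m directions i.i.d. uniformly on S^{d-1} by normalizing Gaussian vectors."""
    U = rng.normal(size=(m, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    return U

rng = np.random.default_rng(0)
U = sample_slice_profile(d=50, m=200, rng=rng)  # m ~ 100-500 in typical practice
w = np.full(len(U), 1.0 / len(U))               # uniform weights over slices
```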

3. Sliced Wasserstein Mechanisms and Noise Calibration

To privatize a $d$-dimensional numerical query $f(X)$, SRPP mechanisms avoid computing high-dimensional Wasserstein sensitivities. Instead, each direction $u \in \mathcal{U}$ is assigned a one-dimensional sensitivity:

$$\Delta_{\infty}^{u}(f) = \sup_{(s_i,s_j),\, \theta} W_{\infty}\big(\text{Law}(\langle f(X), u \rangle \mid s_i, \theta),\, \text{Law}(\langle f(X), u \rangle \mid s_j, \theta)\big)$$

where $W_{\infty}$ denotes the $\infty$-Wasserstein distance on $\mathbb{R}$. Concretely, for $f: \mathcal{X} \rightarrow \mathbb{R}^d$ and neighboring datasets $D \sim D'$, $\Delta_{1,u}(f) = \sup_{D \sim D'} |\langle f(D) - f(D'), u \rangle|$.
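
For intuition, the hedged sketch below (our own helper, not from the paper) evaluates the inner quantity $|\langle f(D) - f(D'), u \rangle|$ over an enumerated set of neighboring outputs; in general $\Delta_{1,u}$ requires an analytic bound over all secret pairs and couplings, so an empirical maximum of this kind is only a lower-bound sanity check.

```python
import numpy as np

def empirical_directional_sensitivity(f_D, f_D_neighbors, U):
    """Empirical per-slice sensitivity: max over enumerated neighbors D' of
    |<f(D) - f(D'), u_l>| for each slice u_l (rows of U).
    Only a lower bound on Delta_{1,u}: the true supremum ranges over all
    neighboring datasets/couplings and must be bounded analytically."""
    diffs = f_D[None, :] - np.asarray(f_D_neighbors)  # shape (k, d)
    return np.abs(diffs @ U.T).max(axis=0)            # shape (m,)
```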

Additive Gaussian Noise Calibration: For $N \sim N(0, \sigma^2 I_d)$, the projection $\langle N, u \rangle$ is $N(0, \sigma^2)$. The per-direction "shift-Rényi envelope" for the $1$-D Gaussian case is

$$R_{\alpha}(\sigma, z) = \sup_{|a| \leq z} D_{\alpha}\big(N(0,\sigma^2) \,\Vert\, N(a, \sigma^2)\big) = \frac{\alpha z^2}{2\sigma^2}$$

Two sensitivity aggregations are defined:

  • Average squared sensitivity: $\bar{\Delta}^2 = \sum_{\ell=1}^m \omega_\ell \, (\Delta^{u_\ell}_\infty)^2$
  • Worst-slice sensitivity: $\Delta_*^2 = \max_{\ell=1,\dots,m} (\Delta^{u_\ell}_\infty)^2$

Mechanism calibration theorems:

| Mechanism | Noise Variance Condition | SRPP Type |
| --- | --- | --- |
| Ave-SRPE Gaussian | $\sigma^2 = \alpha \bar{\Delta}^2 / (2\varepsilon)$ | $(\alpha, \varepsilon, \omega)$-Ave-SRPP (envelope) |
| Joint-SRPE Gaussian | $\sigma^2 = \alpha \Delta_*^2 / (2\varepsilon)$ | $(\alpha, \varepsilon, \omega)$-Joint-SRPP (envelope) |

If these conditions are satisfied, the corresponding (Ave or Joint) SRPP guarantee is realized (Zhang et al., 30 Nov 2025).
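
A hedged implementation sketch of these calibration rules for the Gaussian mechanism (function names and the `mode` switch are our own conventions):

```python
import numpy as np

def calibrate_gaussian_variance(delta_u, w, alpha, eps, mode="ave"):
    """Noise variance from the calibration theorems:
    Ave-SRPE:   sigma^2 = alpha * bar_Delta^2 / (2 eps), with bar_Delta^2 the
                weighted average of squared per-slice sensitivities;
    Joint-SRPE: sigma^2 = alpha * Delta_*^2  / (2 eps), with Delta_*^2 the
                worst-slice squared sensitivity."""
    if mode == "ave":
        sens2 = float(w @ delta_u ** 2)      # average squared sensitivity
    else:
        sens2 = float(np.max(delta_u ** 2))  # worst-slice sensitivity
    return alpha * sens2 / (2.0 * eps)

def srpe_gaussian_release(value, delta_u, w, alpha, eps, rng, mode="ave"):
    """Release a d-dimensional query value with isotropic Gaussian noise
    calibrated to the chosen SRPP guarantee."""
    sigma2 = calibrate_gaussian_variance(delta_u, w, alpha, eps, mode)
    return value + rng.normal(scale=np.sqrt(sigma2), size=value.shape)
```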

4. SRPP Envelope (SRPE): Upper Bounds and Implementability

A per-slice shift-Rényi envelope is defined by

$$R_\alpha(\zeta, z) = \sup_{\|a\| \leq z} D_\alpha(\zeta_{-a} \Vert \zeta)$$

where $\zeta_{-a}$ is the law of $N - a$. For any additive mechanism $M(x) = f(x) + N$ and slice $u$,

$$D_\alpha\big(\text{Law}(\langle M(X),u \rangle \mid s_i,\theta) \,\Vert\, \text{Law}(\langle M(X),u \rangle \mid s_j,\theta)\big) \leq R_\alpha\big(\zeta,\, W_\infty(\Psi^u_{\#} F_{s_i},\, \Psi^u_{\#} F_{s_j})\big)$$

Aggregating, the SRPP Envelopes (abbreviated as SRPE) are:

  • Ave-SRPE:

$$AR^{\infty}_{\alpha,\omega} = \sum_{\ell} \omega_\ell \, R_\alpha(\zeta, \Delta^{u_\ell}_\infty)$$

  • Joint-SRPE:

$$JR^{\infty}_{\alpha,\omega} = \frac{1}{\alpha-1} \log\left[\sum_{\ell} \omega_\ell \, e^{(\alpha-1) R_\alpha(\zeta, \Delta^{u_\ell}_\infty)}\right]$$

If $AR^{\infty}_{\alpha,\omega} \leq \varepsilon$, the mechanism satisfies $(\alpha, \varepsilon, \omega)$-Ave-SRPE; similarly for the Joint variant.
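
Specializing to Gaussian noise, where $R_\alpha(\sigma, z) = \alpha z^2 / (2\sigma^2)$ as in Section 3, both SRPE aggregates reduce to a few lines; this sketch uses our own function names, not the paper's reference code:

```python
import numpy as np

def gaussian_shift_envelope(sigma, z, alpha):
    """Per-slice shift-Renyi envelope for 1-D Gaussian noise: alpha z^2 / (2 sigma^2)."""
    return alpha * z ** 2 / (2.0 * sigma ** 2)

def ave_srpe(delta_u, w, sigma, alpha):
    """Ave-SRPE: weighted average of per-slice envelopes."""
    return float(w @ gaussian_shift_envelope(sigma, delta_u, alpha))

def joint_srpe(delta_u, w, sigma, alpha):
    """Joint-SRPE: log-moment aggregation of per-slice envelopes."""
    r = gaussian_shift_envelope(sigma, delta_u, alpha)
    return float(np.log(w @ np.exp((alpha - 1.0) * r)) / (alpha - 1.0))

# A mechanism is certified at level eps when the chosen envelope is <= eps.
```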

5. Iterative Learning and SRPP-SGD

SRPP-SGD specializes SRPP to iterative learning by privatizing each SGD update via gradient clipping and Gaussian noise. Compositional privacy accounting is achieved using the History-Uniform Cap (HUC).

History-Uniform Cap (HUC): For a slice profile $\mathcal{U} = \{u_\ell\}$, the vector $h_t = (h_{t,\ell})$ is a HUC at iteration $t$ if, for all secret pairs, priors $\theta$, any trajectory $y_{<t}$, and any coupling of $(X, X')$,

$$|\langle f_t(X, y_{<t}; r) - f_t(X', y_{<t}; r),\, u_\ell \rangle| \leq \sqrt{h_{t,\ell}}$$

almost surely over $(X, X', r)$. This is equivalent to the existence of a positive semidefinite matrix $H_t$ with $u_\ell^\top H_t u_\ell = h_{t,\ell}$ and $|\langle \Delta_t, u \rangle|^2 \leq u^\top H_t u$ for all $u \in \mathcal{U}$.

Existence via Gradient Clipping and Lipschitz Regularity: When per-example gradients are $\ell_2$-clipped at $C$, the batch size is $B_t$, at most $K_t$ samples differ, and the per-step map $T_t$ is slicewise Lipschitz with constants $L_{t,\ell}$, a HUC is given by

$$h_{t,\ell} = \left( \frac{2 K_t L_{t,\ell} C}{B_t} \right)^2$$

Mean-square HUC (ms-HUC): Replacing the worst-case $K_t$ with the mean-square bound $\overline{K}_t^2 = \sup_{\text{coupling}} \mathbb{E}[K_t^2]$ yields

$$h_{t,\ell}^{\text{ms}} = \left( \frac{2 L_{t,\ell} C}{B_t} \right)^2 \overline{K}_t^2$$

Moments-accountant composition: For $N_t \sim N(0, \sigma^2 I)$,

$$\varepsilon_{t,\ell} = \frac{\alpha}{2 \sigma^2} h_{t,\ell}$$

The total per-slice cost after $T$ steps is $\sum_{t=1}^T \varepsilon_{t,\ell}$. The noise-scale conditions across $T$ steps are:

  • Ave-SRPP-SGD: $\sigma^2 \geq \frac{\alpha}{2\varepsilon} \sum_{t,\ell} \omega_\ell \, h_{t,\ell}$
  • Joint-SRPP-SGD: $\sigma^2 \geq \frac{\alpha}{2\varepsilon} \sum_t \max_\ell h_{t,\ell}$

These results extend to mean-squared settings for ms-SRPP-SGD (Zhang et al., 30 Nov 2025).
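
A sketch of the resulting accountant, assuming the per-step, per-slice HUCs are stored as a $T \times m$ array `h` (the array layout and helper names are our assumptions, not the paper's API):

```python
import numpy as np

def huc_from_clipping(K, L, C, B):
    """Worst-case HUC: h_{t,l} = (2 K_t L_{t,l} C / B_t)^2.
    K and B have shape (T,); L has shape (T, m)."""
    return (2.0 * K[:, None] * L * C / B[:, None]) ** 2

def ms_huc_from_clipping(K_bar, L, C, B):
    """Mean-square HUC: h_{t,l}^ms = (2 L_{t,l} C / B_t)^2 * Kbar_t^2."""
    return (2.0 * L * C / B[:, None]) ** 2 * K_bar[:, None] ** 2

def srpp_sgd_sigma(h, w, alpha, eps, mode="ave"):
    """Noise scale satisfying the T-step conditions:
    Ave:   sigma^2 >= alpha/(2 eps) * sum_{t,l} w_l h_{t,l}
    Joint: sigma^2 >= alpha/(2 eps) * sum_t max_l h_{t,l}"""
    total = float((h @ w).sum()) if mode == "ave" else float(h.max(axis=1).sum())
    return np.sqrt(alpha * total / (2.0 * eps))
```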

6. Composition Properties

If $J$ mechanisms $M_1, \dots, M_J$, each on the same dataset and each satisfying $(\alpha, \varepsilon_j, \omega)$-Ave-SRPP (or Joint, ms-Ave, ms-Joint) for the same slice profile and weights $\omega$, are released independently, then their product $M(x) = (M_1(x), \dots, M_J(x))$ satisfies $(\alpha, \sum_j \varepsilon_j, \omega)$-SRPP of the same type. This holds by tensorization of the Rényi divergence (per slice or sliced channel) and aggregation via averaging or the log-moment (Zhang et al., 30 Nov 2025). Composition is thus additive in the per-mechanism budgets, as the one-liner below illustrates.
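
An illustrative helper, not library code:

```python
def compose_srpp(eps_list):
    """J independent releases, each (alpha, eps_j, omega)-SRPP of the same type
    and slice profile, jointly satisfy (alpha, sum_j eps_j, omega)-SRPP."""
    return sum(eps_list)

# e.g. three releases at eps = 0.5, 0.25, 0.25 compose to a total eps of 1.0
assert compose_srpp([0.5, 0.25, 0.25]) == 1.0
```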

7. Experimental Validation and Empirical Behavior

Experiments were conducted both for static query privatization and iterative learning.

  • Static queries (Adult, Cleveland Heart, Student Performance): Using both Ave-SRPE and Joint-SRPE mechanisms ($\alpha = 4$, $m \approx 200$ random slices, and $\sigma^2$ calibrated as above), privatized queries included per-secret statistics and model parameters (e.g., means, variances, logistic regression parameters). As $\varepsilon$ increases, mean squared error (MSE) decreases while attacker accuracy rises, i.e., utility improves as privacy degrades. For small $\varepsilon$, MAP attacker accuracy remains near the prior baseline; the Joint mechanism is consistently more conservative than Ave (higher MSE, lower attack accuracy).
  • Iterative learning (CIFAR-10, ResNet-22): The secret is label presence ("cat"), with two scenarios differing by $\Delta = 20$ examples. The DP-SGD pipeline applied gradient clipping at $C$ and Gaussian noise set by the SRPP-SGD and ms-SRPP-SGD formulas ($\alpha = 16$). ms-SRPP-SGD required less noise for an equivalent $\varepsilon$ and achieved higher test accuracy than group-DP-SGD and worst-case SRPP-SGD. Overfitting experiments show ms-SRPP-SGD limits membership inference (ROC AUC approaches $0.5$) under strong privacy budgets (Zhang et al., 30 Nov 2025).

| Mechanism | Static Query Utility | Iterative Test Acc. | Attacker Advantage |
| --- | --- | --- | --- |
| Ave-SRPE | Lower MSE | Intermediate | Higher |
| Joint-SRPE | Higher MSE | More conservative | Lower |
| ms-SRPP-SGD | Highest | Highest | Smallest |
| group-DP-SGD | Most conservative | Lowest | Smallest |

Summary

Sliced Rényi Pufferfish Privacy replaces high-dimensional RPP benchmarks with aggregated directional Rényi divergences, enabling tractable, geometry-aware privacy guarantees and closed-form, anisotropic noise calibration. SRPP supports practical privacy composition for both static and iterative (SGD) settings, yielding quantifiable utility gains over conventional high-dimensional Pufferfish and group DP methods (Zhang et al., 30 Nov 2025).

References

  1. Zhang et al., 30 Nov 2025.
