Sliced Rényi Pufferfish Privacy
- Sliced Rényi Pufferfish Privacy (SRPP) generalizes Pufferfish privacy by using one-dimensional directional Rényi divergences for tractable, geometry-aware privacy guarantees.
- It defines Ave-SRPP and Joint-SRPP aggregations that enable closed-form anisotropic noise calibration, addressing challenges in high-dimensional optimal transport.
- SRPP introduces practical composition methods such as the History-Uniform Cap and ms-HUC to support iterative learning while balancing privacy and utility.
Sliced Rényi Pufferfish Privacy (SRPP) generalizes the Pufferfish privacy framework by leveraging directional (sliced) Rényi divergences for privacy accounting. SRPP addresses two central obstacles in Rényi Pufferfish Privacy (RPP): the prohibitive complexity of high-dimensional optimal transport and the lack of a mechanism-agnostic composition rule for iterative learning. SRPP achieves tractable, geometry-aware privacy guarantees by replacing high-dimensional comparisons with a collection of one-dimensional directional comparisons indexed by a set of unit vectors (a "slice profile"). It enables closed-form, statistically stable, and anisotropic noise calibration for privatization mechanisms, and offers rigorous composition for iterative deep learning via the History-Uniform Cap (HUC) and its mean-square variant (ms-HUC) (Zhang et al., 30 Nov 2025).
1. Formal Definition and Divergence Framework
Given probability measures $P$ and $Q$ on $\mathbb{R}^d$ with respective densities $p$ and $q$, and a unit vector $\theta \in \mathbb{S}^{d-1}$, the order-$\alpha$ directional Rényi divergence is defined by
$$D_\alpha^\theta(P \,\|\, Q) = \frac{1}{\alpha-1}\log \int_{\mathbb{R}} p_\theta(t)^\alpha\, q_\theta(t)^{1-\alpha}\, dt,$$
where $p_\theta, q_\theta$ are the push-forward densities under the projection $x \mapsto \langle \theta, x \rangle$.
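For Gaussians with a shared covariance, the projected measures are one-dimensional Gaussians, so the directional divergence admits the closed form $D_\alpha^\theta = \alpha \langle\theta, \mu_1-\mu_2\rangle^2 / (2\,\theta^\top\Sigma\theta)$. A minimal numerical sketch (not from the paper; names are illustrative):

```python
import numpy as np

def directional_renyi_gaussian(alpha, mu1, mu2, cov, theta):
    """Order-alpha directional Renyi divergence between N(mu1, cov) and
    N(mu2, cov) along direction theta.

    Projecting onto theta gives 1-D Gaussians N(<theta, mu>, theta^T cov theta);
    for equal variances, D_alpha = alpha * (mean gap)^2 / (2 * variance).
    """
    theta = theta / np.linalg.norm(theta)      # ensure a unit vector
    gap = theta @ (mu1 - mu2)                  # projected mean shift
    var = theta @ cov @ theta                  # projected variance
    return alpha * gap**2 / (2.0 * var)

mu1, mu2 = np.array([1.0, 0.0]), np.array([0.0, 0.0])
cov = np.eye(2)
# Slicing along the mean gap recovers alpha * ||gap||^2 / 2;
# an orthogonal slice sees no difference at all.
print(directional_renyi_gaussian(2.0, mu1, mu2, cov, np.array([1.0, 0.0])))  # 1.0
print(directional_renyi_gaussian(2.0, mu1, mu2, cov, np.array([0.0, 1.0])))  # 0.0
```

This illustrates why a low number of slices can undersample geometry: a direction orthogonal to the secret-dependent shift reports zero divergence.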
Aggregating these divergences over a slice profile $\omega = \{(\theta_i, \omega_i)\}_{i=1}^m$, with weights $\omega_i \ge 0$, $\sum_{i} \omega_i = 1$, two aggregation schemes are introduced:
- Ave-SRPP (Average Sliced Rényi Pufferfish Privacy): $\bar D_\alpha^\omega(P\|Q) = \sum_{i=1}^m \omega_i\, D_\alpha^{\theta_i}(P\|Q)$
- Joint-SRPP (Joint Log-Moment Sliced Rényi Pufferfish Privacy): $D_\alpha^{J,\omega}(P\|Q) = \frac{1}{\alpha-1}\log \sum_{i=1}^m \omega_i \exp\big((\alpha-1)\, D_\alpha^{\theta_i}(P\|Q)\big)$
A mechanism $\mathcal{M}$ satisfies $(\alpha, \varepsilon, \omega)$-Ave-SRPP if, for all secret pairs $(s_i, s_j)$ and priors $\pi$,
$$\bar D_\alpha^\omega\big(\mathcal{M}(X) \mid s_i, \pi \,\big\|\, \mathcal{M}(X) \mid s_j, \pi\big) \le \varepsilon,$$
and similarly for Joint-SRPP using $D_\alpha^{J,\omega}$.
Ordering is established by
$$\bar D_\alpha^\omega(P\|Q) \;\le\; D_\alpha^{J,\omega}(P\|Q) \;\le\; D_\alpha(P\|Q),$$
where $D_\alpha$ is the standard order-$\alpha$ Rényi divergence (Zhang et al., 30 Nov 2025).
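Taking the Joint aggregation to be the weighted log-mean-exp of the per-slice divergences (the natural reading of "log-moment"), the ordering Ave ≤ Joint ≤ worst slice follows from Jensen's inequality and can be checked numerically; a sketch under that assumption:

```python
import numpy as np

def ave_srpp(div, w):
    """Average aggregation: weighted mean of per-slice divergences."""
    return float(np.dot(w, div))

def joint_srpp(div, w, alpha):
    """Log-moment aggregation: (1/(alpha-1)) log sum_i w_i exp((alpha-1) D_i)."""
    return float(np.log(np.dot(w, np.exp((alpha - 1.0) * div))) / (alpha - 1.0))

rng = np.random.default_rng(0)
div = rng.uniform(0.0, 2.0, size=8)   # per-slice divergences D_alpha^{theta_i}
w = np.full(8, 1.0 / 8)               # uniform slice weights
alpha = 2.0

ave, joint = ave_srpp(div, w), joint_srpp(div, w, alpha)
# Jensen's inequality gives Ave <= Joint; log-mean-exp stays below the worst slice.
assert ave <= joint <= float(div.max()) + 1e-12
```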
2. Slicing Geometry and Slice Profile
The slice profile $\omega$ is typically composed of $m$ directions drawn independently and uniformly from the unit sphere $\mathbb{S}^{d-1}$. Typical practice chooses a moderate number of slices $m$ to balance approximation fidelity against computational cost. Increasing $m$ provides a closer approximation to the continuous (fully sliced) setting, whereas a low $m$ can undersample critical geometric features. Weights $\omega_i$ may be uniform or adapted to data geometry, further enabling geometry-aware privacy calibration.
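A slice profile of random directions with uniform weights can be drawn by normalizing Gaussian samples (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def sample_slice_profile(m, d, seed=0):
    """Draw m directions i.i.d. uniformly on the unit sphere S^{d-1}
    (normalized Gaussian vectors) with uniform weights summing to 1."""
    rng = np.random.default_rng(seed)
    thetas = rng.standard_normal((m, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    weights = np.full(m, 1.0 / m)
    return thetas, weights

thetas, weights = sample_slice_profile(m=50, d=10)
assert np.allclose(np.linalg.norm(thetas, axis=1), 1.0)  # unit directions
assert np.isclose(weights.sum(), 1.0)                    # normalized weights
```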
3. Sliced Wasserstein Mechanisms and Noise Calibration
To privatize a $d$-dimensional numerical query $f$, SRPP mechanisms avoid computing high-dimensional Wasserstein sensitivities. Instead, each direction $\theta_i$ is assigned a one-dimensional sensitivity
$$\Delta_i = \sup_{(s_i, s_j),\, \pi} W_\infty\big(\langle \theta_i, f(X)\rangle \mid s_i, \pi,\;\; \langle \theta_i, f(X)\rangle \mid s_j, \pi\big),$$
where $W_p$ denotes the $p$-Wasserstein distance on $\mathbb{R}$. For one-dimensional distributions, $W_p$ reduces to a comparison of quantile functions, which makes each $\Delta_i$ tractable.
Additive Gaussian Noise Calibration: For $\sigma > 0$, the mechanism is $\mathcal{M}(X) = f(X) + Z$ with $Z \sim \mathcal{N}(0, \sigma^2 I_d)$. The per-direction "shift-Rényi envelope" for the $1$-D Gaussian case is
$$\sup_{|t| \le \Delta} D_\alpha\big(\mathcal{N}(t, \sigma^2) \,\|\, \mathcal{N}(0, \sigma^2)\big) = \frac{\alpha \Delta^2}{2\sigma^2}.$$
Two sensitivity aggregations are defined:
- Average squared sensitivity: $\bar\Delta_\omega^2 = \sum_{i=1}^m \omega_i \Delta_i^2$
- Worst-slice sensitivity: $\Delta_{\max} = \max_{1 \le i \le m} \Delta_i$
Mechanism calibration theorems:
| Mechanism | Noise Variance Condition | SRPP Type |
|---|---|---|
| Ave-SRPE Gaussian | $\sigma^2 \ge \alpha \bar\Delta_\omega^2 / (2\varepsilon)$ | $(\alpha, \varepsilon, \omega)$-Ave-SRPP |
| Joint-SRPE Gaussian | $\sigma^2 \ge \alpha \Delta_{\max}^2 / (2\varepsilon)$ | $(\alpha, \varepsilon, \omega)$-Joint-SRPP |
If these conditions are satisfied, the corresponding (Ave or Joint) SRPP guarantee is realized (Zhang et al., 30 Nov 2025).
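Assuming the per-slice Gaussian shift envelope $\alpha \Delta^2 / (2\sigma^2)$, both calibrations reduce to closed forms in the average squared and worst-slice sensitivities. A sketch under that assumption (names are illustrative):

```python
import numpy as np

def calibrate_gaussian(deltas, weights, alpha, eps):
    """Noise std for Ave- and Joint-SRPE Gaussian mechanisms, assuming the
    1-D Gaussian shift envelope alpha * Delta^2 / (2 sigma^2) per slice."""
    avg_sq = float(np.dot(weights, deltas**2))   # average squared sensitivity
    worst = float(np.max(deltas))                # worst-slice sensitivity
    sigma_ave = np.sqrt(alpha * avg_sq / (2.0 * eps))
    sigma_joint = np.sqrt(alpha * worst**2 / (2.0 * eps))
    return sigma_ave, sigma_joint

deltas = np.array([0.5, 1.0, 0.2, 0.8])   # per-slice W-sensitivities
weights = np.full(4, 0.25)
s_ave, s_joint = calibrate_gaussian(deltas, weights, alpha=2.0, eps=1.0)
# Joint calibrates to the worst slice, so it never adds less noise than Ave.
assert s_joint >= s_ave
```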
4. SRPP Envelope (SRPE): Upper Bounds and Implementability
A per-slice shift-Rényi envelope is defined by
$$E_\alpha^{\theta}(\Delta) = \sup_{|t| \le \Delta} D_\alpha\big(\nu_\theta(\cdot - t) \,\|\, \nu_\theta\big),$$
where $\nu_\theta$ is the law of $\langle \theta, Z\rangle$. For any additive mechanism $\mathcal{M}(X) = f(X) + Z$ and slice $\theta_i$,
$$D_\alpha^{\theta_i}\big(\mathcal{M}(X) \mid s_i, \pi \,\big\|\, \mathcal{M}(X) \mid s_j, \pi\big) \le E_\alpha^{\theta_i}(\Delta_i).$$
Aggregating, the SRPP Envelopes (abbreviated as SRPE) are:
- Ave-SRPE: $\bar E_\alpha^\omega = \sum_{i=1}^m \omega_i\, E_\alpha^{\theta_i}(\Delta_i)$
- Joint-SRPE: $E_\alpha^{J,\omega} = \frac{1}{\alpha-1} \log \sum_{i=1}^m \omega_i \exp\big((\alpha-1)\, E_\alpha^{\theta_i}(\Delta_i)\big)$
If $\bar E_\alpha^\omega \le \varepsilon$, the mechanism satisfies $(\alpha, \varepsilon, \omega)$-Ave-SRPP; similarly for the Joint variant.
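For additive Gaussian noise the per-slice envelope is $\alpha \Delta_i^2 / (2\sigma^2)$, so implementability amounts to evaluating both aggregates and comparing them to the budget. A sketch under that assumption:

```python
import numpy as np

def srpe_aggregates(deltas, weights, alpha, sigma):
    """Ave- and Joint-SRPE for additive Gaussian noise, where each per-slice
    envelope is E_i = alpha * Delta_i^2 / (2 sigma^2)."""
    env = alpha * deltas**2 / (2.0 * sigma**2)
    ave = float(np.dot(weights, env))
    joint = float(np.log(np.dot(weights, np.exp((alpha - 1.0) * env)))
                  / (alpha - 1.0))
    return ave, joint

deltas = np.array([0.3, 0.9, 0.6])
weights = np.full(3, 1.0 / 3)
ave, joint = srpe_aggregates(deltas, weights, alpha=3.0, sigma=1.5)
eps = 1.0
# Certify the Ave guarantee when ave <= eps, the stricter Joint one when
# joint <= eps; the Joint aggregate always dominates the Ave aggregate.
assert ave <= joint
assert ave <= eps and joint <= eps
```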
5. Iterative Learning and SRPP-SGD
SRPP-SGD specializes to iterative learning by privatizing each SGD update via gradient clipping and Gaussian noise. Compositional privacy accounting is achieved using the History-Uniform Cap (HUC).
History-Uniform Cap (HUC): For a slice profile $\omega$, the vector $c_t = (c_{t,1}, \ldots, c_{t,m})$ is a HUC at iteration $t$ if, for all secret pairs, priors $\pi$, any trajectory of past iterates $h_{t-1}$, and any coupling $\gamma$ of the pair of conditional update distributions,
$$|\langle \theta_i, u_t - u_t' \rangle| \le c_{t,i} \quad \text{for all } i = 1, \ldots, m,$$
almost surely over $(u_t, u_t') \sim \gamma$. This is equivalent to the existence of a positive semidefinite matrix $M_t$ with $(u_t - u_t')(u_t - u_t')^\top \preceq M_t$ almost surely and $\theta_i^\top M_t\, \theta_i \le c_{t,i}^2$ for all $i$.
Existence via Gradient Clipping and Lipschitz Regularity: When per-example gradients are $\ell_2$-clipped at $C$, the batch size is $B$, at most $k$ samples differ between the secret scenarios, and the per-step update map is slicewise Lipschitz, an explicit HUC exists: a single averaged, clipped gradient step (with step size $\eta$) displaces by at most the group sensitivity $2kC\eta/B$ in any direction, and the slicewise Lipschitz constants control how this displacement propagates through the history.
Mean-square HUC (ms-HUC): Replacing the worst-case (almost-sure) cap with the mean-square bound $\mathbb{E}_\gamma\big[\langle \theta_i, u_t - u_t' \rangle^2\big] \le \tilde c_{t,i}^2$ yields smaller, statistically stable caps and correspondingly less required noise.
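The clipping-based cap can be illustrated empirically: for two neighboring batches differing in at most $k$ examples, the per-slice displacement of one averaged, clipped gradient step never exceeds $2kC\eta/B$. A simulation sketch under these assumptions (ignoring the Lipschitz history term; all names are illustrative):

```python
import numpy as np

def clip(g, C):
    """Project each row of g onto the l2-ball of radius C."""
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    return g * np.minimum(1.0, C / norms)

rng = np.random.default_rng(1)
B, d, k, C, eta = 32, 16, 2, 1.0, 0.1
grads = rng.standard_normal((B, d)) * 3.0
grads_alt = grads.copy()
grads_alt[:k] = rng.standard_normal((k, d)) * 3.0   # k differing examples

# One SGD step on each neighboring batch; displacement of the two updates.
delta = eta * (clip(grads, C).mean(axis=0) - clip(grads_alt, C).mean(axis=0))

thetas = rng.standard_normal((20, d))
thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
cap = 2.0 * k * C * eta / B            # worst-case per-slice cap (group sens.)
proj = np.abs(thetas @ delta)          # realized per-slice shifts
assert np.all(proj <= cap + 1e-12)
# The root-mean-square shift (the ms-HUC quantity) is typically far below cap.
print(np.sqrt((proj**2).mean()), cap)
```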
Moments-accountant composition: For $\alpha > 1$ and per-step Gaussian noise of scale $\sigma_t$, step $t$ contributes at most $\alpha c_{t,i}^2 / (2\sigma_t^2)$ to slice $i$.
The total per-slice cost after $T$ steps is $\sum_{t=1}^{T} \alpha c_{t,i}^2 / (2\sigma_t^2)$. The noise scale conditions across $T$ steps are:
- Ave-SRPP-SGD: $\sum_{i=1}^{m} \omega_i \sum_{t=1}^{T} \frac{\alpha c_{t,i}^2}{2\sigma_t^2} \le \varepsilon$
- Joint-SRPP-SGD: $\frac{1}{\alpha-1} \log \sum_{i=1}^{m} \omega_i \exp\Big( (\alpha-1) \sum_{t=1}^{T} \frac{\alpha c_{t,i}^2}{2\sigma_t^2} \Big) \le \varepsilon$
These results extend to mean-squared settings for ms-SRPP-SGD (Zhang et al., 30 Nov 2025).
6. Composition Properties
If mechanisms $\mathcal{M}_1, \ldots, \mathcal{M}_k$, each operating on the same dataset and each satisfying $(\alpha, \varepsilon_j, \omega)$-Ave-SRPP (or Joint, ms-Ave, ms-Joint) for the same slice profile $\omega$, are released independently, then their product satisfies $(\alpha, \sum_{j=1}^{k} \varepsilon_j, \omega)$-SRPP of the same type. This holds by tensorization of Rényi divergence (per slice or sliced channel) and aggregation via averaging or log-moment (Zhang et al., 30 Nov 2025).
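For the Ave variant, additive composition is immediate from linearity: per-slice divergences add across independent releases, and the weighted average of the summed per-slice totals equals the sum of the per-mechanism averages. An illustrative check:

```python
import numpy as np

rng = np.random.default_rng(2)
m, k = 6, 3
w = np.full(m, 1.0 / m)
# Per-mechanism, per-slice divergences D^{theta_i} for k independent releases.
divs = rng.uniform(0.0, 0.5, size=(k, m))

# Tensorization: per-slice divergences add across independent mechanisms, so
# the composed Ave-SRPP budget equals the sum of the individual budgets.
eps_each = divs @ w                    # epsilon_j for each mechanism
eps_composed = divs.sum(axis=0) @ w    # budget of the product mechanism
assert np.isclose(eps_composed, eps_each.sum())
```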
7. Experimental Validation and Empirical Behavior
Experiments were conducted both for static query privatization and iterative learning.
- Static queries (Adult, Cleveland Heart, Student Performance): Using both Ave-SRPE and Joint-SRPE mechanisms (with $m$ random slices and $\sigma$ calibrated as above), privatized queries included per-secret statistics and model parameters (e.g., means, variances, logistic regression parameters). As the budget $\varepsilon$ increases, mean squared error (MSE) decreases and attacker accuracy rises; utility improves while privacy degrades. For small $\varepsilon$, MAP attacker accuracy remains near the prior baseline; the Joint mechanism is consistently more conservative than Ave (higher MSE, lower attack accuracy).
- Iterative learning (CIFAR-10, ResNet-22): The secret is label presence ("cat"), with two scenarios that differ in the examples carrying the secret label. The DP-SGD-style pipeline applied gradient clipping and Gaussian noise according to the SRPP-SGD and ms-SRPP-SGD formulas. ms-SRPP-SGD required less noise for an equivalent $\varepsilon$ and achieved higher test accuracy than group-DP-SGD and (worst-case) SRPP-SGD. Overfitting experiments show ms-SRPP-SGD limits membership inference (ROC AUC approaches 0.5) under strong privacy budgets (Zhang et al., 30 Nov 2025).
| Mechanism | Static Query Utility | Iterative Test Acc. | Attacker Advantage |
|---|---|---|---|
| Ave-SRPE | Lower MSE | Intermediate | Higher |
| Joint-SRPE | Higher MSE | More conservative | Lower |
| ms-SRPP-SGD | Highest | Highest | Smallest |
| group-DP-SGD | Most conservative | Lowest | Smallest |
Summary
Sliced Rényi Pufferfish Privacy replaces high-dimensional RPP benchmarks with aggregated directional Rényi divergences, enabling tractable, geometry-aware privacy guarantees and closed-form, anisotropic noise calibration. SRPP supports practical privacy composition for both static and iterative (SGD) settings, yielding quantifiable utility gains over conventional high-dimensional Pufferfish and group DP methods (Zhang et al., 30 Nov 2025).