
f-DP: Unified Privacy Analysis Framework

Updated 28 November 2025
  • f-DP is a generalization of differential privacy that uses hypothesis-testing trade-off functions to deliver lossless and compositional privacy analysis.
  • It offers exact composition, privacy amplification by subsampling, and robust auditing, making it effective for federated learning and decentralized protocols.
  • The framework enables optimal conversion to classical DP parameters, yielding sharper privacy-utility trade-offs for various mechanism designs.

$f$-DP Approach

The $f$-DP approach generalizes differential privacy (DP) using hypothesis-testing-based trade-off functions, yielding a lossless, compositional, and robust framework for analyzing privacy in diverse settings. It subsumes $(\epsilon,\delta)$-DP and Rényi DP, enables optimal privacy accounting for complex mechanisms, and provides sharper privacy-utility trade-offs across mechanism design, federated learning, decentralized protocols, and communication-efficient private learning.

1. Definition and Theoretical Foundations

Let $P, Q$ be probability distributions (typically, the outputs of a randomized mechanism $M$ on adjacent datasets). For any randomized test $\phi\colon \mathcal{X} \to [0,1]$, define the Type I error $\alpha_\phi = \mathbb{E}_P[\phi]$ and Type II error $\beta_\phi = 1 - \mathbb{E}_Q[\phi]$. The trade-off function (ROC curve) is

$$T(P,Q)(\alpha) = \inf_{\phi:\,\alpha_\phi \le \alpha} \beta_\phi, \qquad \alpha \in [0,1].$$

A function $f\colon [0,1] \to [0,1]$ is a valid trade-off function if it is convex, continuous, nonincreasing, and satisfies $f(\alpha) \le 1 - \alpha$. A mechanism $M$ is $f$-differentially private ($f$-DP) if for all adjacent datasets $D, D'$,

$$T(M(D), M(D')) \ge f.$$

Classical $(\epsilon,\delta)$-DP is recovered as the special case $f_{(\epsilon,\delta)}(\alpha) = \max\{0,\; 1-\delta-e^\epsilon\alpha,\; e^{-\epsilon}(1-\delta-\alpha)\}$, while Gaussian DP corresponds to $f(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$, where $\Phi$ is the standard normal CDF (Dong et al., 2019, Wang et al., 2023).
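As a quick numerical sketch (Python standard library only; the function names `f_eps_delta` and `gdp` are ours, not from the cited papers), both special-case trade-off curves can be evaluated pointwise:

```python
from math import exp
from statistics import NormalDist

_N = NormalDist()  # standard normal, for Phi and Phi^{-1}

def f_eps_delta(eps: float, delta: float, alpha: float) -> float:
    """Trade-off curve recovered from classical (eps, delta)-DP."""
    return max(0.0, 1 - delta - exp(eps) * alpha, exp(-eps) * (1 - delta - alpha))

def gdp(mu: float, alpha: float) -> float:
    """mu-Gaussian DP trade-off: Phi(Phi^{-1}(1 - alpha) - mu)."""
    return _N.cdf(_N.inv_cdf(1 - alpha) - mu)

# Any valid trade-off lies on or below the perfect-privacy diagonal 1 - alpha,
# which both curves attain when the mechanism leaks nothing (eps = delta = mu = 0).
for a in (0.01, 0.1, 0.5):
    assert f_eps_delta(1.0, 1e-5, a) <= 1 - a + 1e-12
    assert gdp(1.0, a) <= 1 - a + 1e-12
```

Note that smaller $\epsilon$, $\delta$, or $\mu$ pushes the curve toward the diagonal $1-\alpha$, i.e., toward perfect indistinguishability.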

Key theoretical properties:

  • Postprocessing invariance: $f$-DP guarantees are preserved under arbitrary data-independent mappings.
  • Exact composition: $f$-DP is closed under sequential and adaptive composition via tensor products of trade-off functions, without losing tightness (Dong et al., 2019).
  • Relation to RDP: Rényi differential privacy arises as a special case of $f$-divergence-based relaxations (Asoodeh et al., 2020).

2. Composition, Subsampling, and Amplification

Composition plays a central role in privacy analysis:

  • For $f_i$-DP mechanisms $M_i$, the composite mechanism is $f_1 \otimes \cdots \otimes f_n$-DP, where

$$(f_1 \otimes f_2)(\alpha) = T(P_1 \times P_2,\, Q_1 \times Q_2)(\alpha).$$

For Gaussian DP, this yields additivity of privacy budgets under composition: $G_{\mu_1} \otimes G_{\mu_2} = G_{\sqrt{\mu_1^2 + \mu_2^2}}$ (Dong et al., 2019).

  • Privacy amplification by subsampling: If $M$ is $f$-DP, applying $M$ to a random $p$-fraction of the data yields $C_p(f)$-DP, where $C_p$ is an operator on trade-off functions. For $(\epsilon,\delta)$-DP, this produces strictly tighter bounds than classical analysis (Dong et al., 2019).
  • Group privacy: For $g$-adjacent changes, the trade-off satisfies $f_{\mathrm{group}}(\alpha) = 1 - (1-f)^{\circ g}(\alpha)$. For Gaussian DP, this yields $G_{g\mu}$ (Dijk et al., 2022).
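The Gaussian composition rule can be derived directly from the Neyman–Pearson test on product distributions: the log-likelihood ratio of $N(0,1)\times N(0,1)$ against $N(\mu_1,1)\times N(\mu_2,1)$ is itself Gaussian with effective mean shift $\mu = \sqrt{\mu_1^2+\mu_2^2}$. The following sketch (standard library only; helper names are ours) checks this numerically:

```python
from math import sqrt
from statistics import NormalDist

_N = NormalDist()

def composed_beta(mu1: float, mu2: float, alpha: float) -> float:
    """Type II error of the optimal (Neyman-Pearson) level-alpha test for
    N(0,1) x N(0,1) versus N(mu1,1) x N(mu2,1).

    The log-likelihood ratio is mu1*x1 + mu2*x2 - (mu1^2 + mu2^2)/2, which is
    N(-mu^2/2, mu^2) under P and N(+mu^2/2, mu^2) under Q, mu = sqrt(mu1^2+mu2^2).
    """
    mu = sqrt(mu1**2 + mu2**2)
    t = mu * _N.inv_cdf(1 - alpha) - mu**2 / 2  # threshold achieving level alpha
    return _N.cdf((t - mu**2 / 2) / mu)         # Q-probability of accepting H0

def gdp(mu: float, alpha: float) -> float:
    """mu-Gaussian DP trade-off curve."""
    return _N.cdf(_N.inv_cdf(1 - alpha) - mu)

# Additivity of the privacy budget: G_{0.8} tensor G_{0.6} = G_{1.0}.
for a in (0.05, 0.2, 0.5):
    assert abs(composed_beta(0.8, 0.6, a) - gdp(1.0, a)) < 1e-12
```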

3. $f$-DP in Mechanism Design: Discrete and Mixture Mechanisms

$f$-DP directly supports non-Gaussian, discrete, or mixture mechanisms:

  • Finite-output and compressed mechanisms: For binomial-noise mechanisms, binomial mechanisms, and stochastic sign-based compressors, $f$-DP bounds can be computed exactly as lower envelopes of the induced trade-off functions (often in closed form via the Neyman–Pearson lemma) (Jin et al., 2023). This yields optimal privacy analysis in distributed mean estimation, enabling arbitrarily low communication cost without sacrificing accuracy or privacy, thereby breaking the conventional privacy-communication-accuracy trilemma.
  • Privacy amplification by sparsification: Ternary compressors and random dropping schemes provide privacy gains reflected as flat segments in the $f$-DP curve, unattainable in pure $(\epsilon,\delta)$-DP (Jin et al., 2023).
  • Mixture mechanisms: The joint concavity of trade-off functions (Lemma 2.1) and advanced joint concavity (Lemma 4.3) yield pointwise, near-optimal bounds for mechanisms involving random initialization, shuffling, or batch subsampling. These concavity inequalities unify and strengthen previous mixture analyses; shuffling models and randomized initializations are handled seamlessly, resulting in significant privacy amplification compared to prior $(\epsilon,\delta)$ bounds (Wang et al., 2023).
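As a concrete discrete example in the spirit of these exact analyses (a hedged sketch, not the binomial mechanism of Jin et al.; function names are ours), binary randomized response admits a closed-form trade-off via the Neyman–Pearson lemma, and that trade-off coincides exactly with the classical pure-DP curve:

```python
from math import exp, log

def rr_tradeoff(rho: float, alpha: float) -> float:
    """Exact trade-off T(Bern(rho), Bern(1-rho)) for randomized response
    with flip probability rho < 1/2.  By the Neyman-Pearson lemma it is the
    piecewise-linear lower envelope through the deterministic-test points
    (0, 1), (rho, rho), (1, 0)."""
    if alpha <= rho:
        return 1 - (1 - rho) / rho * alpha
    return rho / (1 - rho) * (1 - alpha)

def f_eps_delta(eps: float, delta: float, alpha: float) -> float:
    """Trade-off curve of classical (eps, delta)-DP."""
    return max(0.0, 1 - delta - exp(eps) * alpha, exp(-eps) * (1 - delta - alpha))

# Randomized response with flip probability rho is exactly (eps, 0)-DP with
# eps = log((1-rho)/rho), and its trade-off curve matches f_{(eps,0)} pointwise.
rho = 0.25
eps = log((1 - rho) / rho)
for a in (0.0, 0.1, 0.25, 0.6, 1.0):
    assert abs(rr_tradeoff(rho, a) - f_eps_delta(eps, 0.0, a)) < 1e-12
```

For finite-output mechanisms like this, the trade-off function carries no more information than $(\epsilon,\delta)$; the gains described above appear for richer output distributions and under composition.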

4. $f$-DP in Distributed and Federated Learning

The $f$-DP approach is pivotal in privacy accounting for federated and decentralized learning:

  • Federated learning convergence: In classical FL with per-round noise, standard composition yields privacy losses that diverge as the number of rounds grows. Using $f$-DP together with the shifted-interpolation technique, provably convergent privacy bounds are obtained for both noisy FedAvg and FedProx, even for non-convex objectives (Sun et al., 28 Aug 2024). Explicit convergence rates can be computed in terms of exact trade-off functions and converted to $(\epsilon,\delta)$-DP or RDP without loss.
  • Decentralized protocols: In random-walk or gossip-based decentralized SGD, $f$-DP quantifies pairwise privacy between any two users via the first-hit-time distributions of the communication Markov chain, enabling granular, topology-aware privacy accounting (PN-$f$-DP). Moreover, secret-based correlated-noise protocols, where noise is shared between users via cryptographic secrets, are analyzed via $f$-DP to achieve near-central utility against honest-but-curious adversaries (Li et al., 22 Oct 2025).
  • Empirical comparison and impact: Across complex network topologies, $f$-DP-based accounting was empirically shown to require less noise (and deliver 5–15% higher utility) than RDP-based accounting at fixed $(\epsilon,\delta)$ (Li et al., 22 Oct 2025).

5. Auditing, Estimation, and Black-Box Validation

Auditing mechanisms for $f$-DP are enabled by the hypothesis-testing foundation:

  • Statistical estimation of $f$-DP: Black-box estimation, using perturbed likelihood-ratio tests, kernel density estimation, and $k$-NN Bayes classification, provides uniform confidence intervals on the entire trade-off curve, with nonparametric convergence guarantees (Askin et al., 10 Feb 2025).
  • Empirical audits in practice: One-run randomized injection games and tail-bound-based scoring enable empirical estimation of $f$-DP (and thus $(\epsilon,\delta)$) from a single run of a private mechanism. Empirical results on DP mechanisms demonstrate that $f$-DP-based audits deliver up to 2$\times$ tighter privacy estimates than prior $(\epsilon,\delta)$ audits, particularly for high-dimensional or Gaussian mechanisms (Mahloujifar et al., 29 Oct 2024).
  • Sample complexity and robustness: Empirical $f$-DP estimation is feasible and practical at sample sizes $m = 10^5$–$10^7$ for canonical mechanisms, overcoming the scalability limits of earlier black-box DP audits.

6. Conversion to Classical DP, Bounds, and Accounting

Given an $f$-DP trade-off function $f$, the optimal $(\epsilon,\delta)$-DP parameters can be derived via convex conjugation: $\delta(\epsilon) = 1 + f^*(-e^\epsilon)$, where $f^*$ is the Legendre–Fenchel conjugate. For symmetric $f$, this conversion is tight and lossless (Dong et al., 2019, Wang et al., 2023). This machinery yields strictly improved conversions from RDP to $(\epsilon,\delta)$-DP compared to the classical moments accountant, resulting in substantial reductions in required noise for private SGD and enabling up to 100 additional training rounds under the same privacy budget in concrete settings (Asoodeh et al., 2020).

In mixture mechanisms (e.g., shuffling, random initialization), the advanced joint concavity of trade-off functions yields improved bounds in the low-$\alpha$ regime, which is crucial for small-$\delta$ $(\epsilon,\delta)$ privacy (Wang et al., 2023).

7. Impact, Limitations, and Future Directions

The $f$-DP framework achieves an information-theoretically lossless and compositional theory of privacy, underpinning exact composition, subsampling amplification, discrete and mixture mechanism design, federated and decentralized learning, and black-box auditing.

Potential limitations include the computational cost of numerical composition in large-scale graphs and the current lack of extensions to time-varying or non-Gaussian mechanisms. Future research is expected to develop scalable computational tools for $f$-DP evaluation and numerical accounting, extend the analysis to stronger adversarial threat models, and further generalize $f$-DP to interactive and adaptive data-analysis regimes.

References: Dong et al., 2019; Asoodeh et al., 2020; Wang et al., 2023; Jin et al., 2023; Dijk et al., 2022; Sun et al., 28 Aug 2024; Li et al., 22 Oct 2025; Askin et al., 10 Feb 2025; Mahloujifar et al., 29 Oct 2024.
