Rényi Differential Privacy (RDP)
- Rényi Differential Privacy is a relaxation of differential privacy that uses Rényi divergence to measure and control privacy loss in randomized algorithms.
- It supports precise composition analysis for subsampled and adaptively composed mechanisms via moments accounting, yielding robust and modular privacy guarantees.
- RDP underpins modern private machine learning and synthetic data generation by enabling efficient noise tuning and conversion to classical (ε,δ)-DP.
Rényi Differential Privacy (RDP) is a relaxation of differential privacy formalized by Mironov (2017) that parameterizes the privacy loss of randomized algorithms using Rényi divergence, which is a moment-based generalization of the classical max-divergence used in differential privacy. RDP enables tight and modular analysis of privacy composition—particularly for mechanisms employing subsampling, adaptive composition, or moments accountants—and is now the analytic foundation for most state-of-the-art private machine learning pipelines.
1. Formal Definition and Divergence
Let P and Q be two probability measures defined on the same measurable space. For any order α > 1, the Rényi divergence of order α is

D_α(P ‖ Q) = (1/(α − 1)) log E_{x∼Q}[(P(x)/Q(x))^α].

A randomized mechanism M operating on databases satisfies (α, ε)-Rényi Differential Privacy if for every pair of adjacent datasets D, D′ (differing in one element):

D_α(M(D) ‖ M(D′)) ≤ ε.

As α → 1, D_α approaches the Kullback–Leibler divergence; as α → ∞, D_α approaches the max-divergence, recovering pure ε-differential privacy (Mironov, 2017).
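As a numerical sanity check of these two limits, the divergence can be computed directly for discrete distributions (a minimal sketch in log space to avoid overflow; the example distributions are illustrative):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) for discrete distributions, computed in log space."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    log_terms = alpha * np.log(p) + (1.0 - alpha) * np.log(q)
    m = log_terms.max()
    return (m + np.log(np.exp(log_terms - m).sum())) / (alpha - 1.0)

def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.5, 0.3, 0.2])

# alpha just above 1 nearly matches KL(P || Q); very large alpha nearly
# matches the max-divergence max_x log(p(x)/q(x)).
near_kl = renyi_divergence(p, q, 1.0001)
near_max = renyi_divergence(p, q, 500.0)
```

The divergence is also nondecreasing in α, which is what makes the family useful as a tunable privacy-loss measure.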
2. Composition, Conversion, and Accounting
Sequential and Adaptive Composition
If mechanisms M₁, …, M_k each satisfy (α, εᵢ)-RDP (possibly with different εᵢ but the same α), then their k-fold (possibly adaptive) composition is (α, Σᵢ εᵢ)-RDP (Mironov, 2017; Wang et al., 2018). This enables linear, order-preserving tracking of privacy loss under repeated or adaptive use of RDP mechanisms.
Conversion to (, )-Differential Privacy
Any mechanism guaranteeing (α, ε)-RDP also satisfies (ε′, δ)-differential privacy for any 0 < δ < 1:

ε′ = ε + log(1/δ)/(α − 1)

(Mironov, 2017; Wang et al., 2018; Lécuyer, 2021). The order α is typically tuned to minimize ε′ for a fixed target δ.
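The conversion and the order search can be sketched in a few lines; the Gaussian curve ε(α) = αΔ²/(2σ²) and the values σ = 4, Δ = 1, δ = 1e-5 below are illustrative assumptions, not fixed by the text:

```python
import numpy as np

def rdp_to_dp(alpha, eps_rdp, delta):
    """(alpha, eps)-RDP implies (eps + log(1/delta)/(alpha-1), delta)-DP."""
    return eps_rdp + np.log(1.0 / delta) / (alpha - 1.0)

def best_dp_epsilon(rdp_curve, delta):
    """Scan a grid of orders and keep the smallest converted DP epsilon."""
    return min(rdp_to_dp(a, e, delta) for a, e in rdp_curve.items())

sigma, sensitivity = 4.0, 1.0                      # illustrative values
curve = {a: a * sensitivity**2 / (2 * sigma**2)    # Gaussian RDP curve
         for a in range(2, 129)}
eps_dp = best_dp_epsilon(curve, delta=1e-5)
```

The optimal order balances the growing RDP budget against the shrinking log(1/δ)/(α − 1) conversion penalty, which is why a grid search over α is standard practice.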
Analytical Moments Accountant
The cumulant generating function (CGF) of the privacy loss random variable L = log(Pr[M(D) = o] / Pr[M(D′) = o]) for a mechanism M is

K_M(λ) = log E[e^{λL}],

with ε_M(α) = K_M(α − 1)/(α − 1). The moments accountant accumulates CGFs additively for composed mechanisms:

K_{M₁ ∘ ⋯ ∘ M_k}(λ) = Σᵢ K_{M_i}(λ),

and supports tight (ε, δ)-DP conversion via univariate convex optimization (Wang et al., 2018).
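The additive accounting can be sketched as follows, assuming each step is a Gaussian mechanism with sensitivity 1 (σ, the step counts, and the order grid are illustrative choices):

```python
import math

def gaussian_rdp(alpha, sigma):
    """Per-step RDP of the Gaussian mechanism, sensitivity 1."""
    return alpha / (2.0 * sigma**2)

def accountant_epsilon(sigma, steps, delta, orders=range(2, 257)):
    """Sum per-step RDP over the composition, then convert to (eps, delta)-DP
    via a one-dimensional search over orders."""
    return min(steps * gaussian_rdp(a, sigma) + math.log(1.0 / delta) / (a - 1)
               for a in orders)

eps_1 = accountant_epsilon(sigma=4.0, steps=1, delta=1e-5)
eps_100 = accountant_epsilon(sigma=4.0, steps=100, delta=1e-5)
```

The final DP ε after 100 steps comes out far below 100 times the single-step ε, which is exactly the gain the moments accountant delivers over naive composition.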
3. Amplification by Subsampling and Shuffling
Subsampling (selecting a random subset before applying a mechanism) and shuffling (randomly permuting outputs in the distributed local setting) both amplify privacy.
Subsampling
For Poisson sampling at rate γ, then applying an (α, ε(α))-RDP mechanism to the sample, the composed mechanism is (α, ε′(α))-RDP, where for integer α ≥ 2

ε′(α) ≤ (1/(α − 1)) log(1 + γ² C(α,2) min{4(e^{ε(2)} − 1), 2e^{ε(2)}} + Σ_{j=3}^{α} γ^j C(α,j) · 2e^{(j−1)ε(j)});

the binomial coefficients C(α,j) and the factors e^{(j−1)ε(j)} are derived from the moments of the privacy-loss random variable (Wang et al., 2018). In the Gaussian case with sampling probability q and noise multiplier σ (sensitivity 1), a tight closed form at integer α is

ε′(α) = (1/(α − 1)) log Σ_{k=0}^{α} C(α,k) (1 − q)^{α−k} q^k e^{k(k−1)/(2σ²)}

for valid regimes (Mironov et al., 2019).
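The closed-form sum for the sampled Gaussian mechanism at integer orders is easy to evaluate directly (sensitivity 1 assumed; the q, σ, α values below are illustrative):

```python
import math

def sampled_gaussian_rdp(q, sigma, alpha):
    """RDP of the Poisson-sampled Gaussian mechanism at integer alpha >= 2."""
    total = sum(
        math.comb(alpha, k) * (1 - q) ** (alpha - k) * q**k
        * math.exp(k * (k - 1) / (2 * sigma**2))
        for k in range(alpha + 1)
    )
    return math.log(total) / (alpha - 1)

# q = 1 recovers the unsampled Gaussian budget alpha/(2 sigma^2);
# a small sampling rate shrinks the per-step budget dramatically.
eps_full = sampled_gaussian_rdp(q=1.0, sigma=1.0, alpha=8)
eps_sub = sampled_gaussian_rdp(q=0.01, sigma=1.0, alpha=8)
```

This is the per-step quantity that DP-SGD accountants accumulate across training iterations.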
Shuffling
Shuffling can quadratically amplify privacy in the number of records under local randomization. If n users each apply an ε₀-LDP mechanism, then shuffling yields (α, ε(α))-RDP with

ε(α) ≤ (1/(α − 1)) log(1 + C(α,2) (e^{ε₀} − 1)²/n + higher-order terms)

(Berthier et al., 2019; Girgis et al., 2021; Chen et al., 9 Jan 2024). For large n and small ε₀, this is O(α ε₀²/n). Shuffle-model analyses now reach closed-form expressions and asymptotically optimal bounds (Chen et al., 9 Jan 2024).
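A toy evaluation of the leading-order behavior (constants and higher-order terms are deliberately omitted; the cited papers give the exact, tighter bounds, and the user counts below are illustrative):

```python
import math

def shuffle_rdp_leading(alpha, eps0, n):
    """Leading-order shuffle amplification: eps(alpha) ~ alpha*(e^eps0 - 1)^2 / n.
    Illustrative only; constants and higher-order terms are dropped."""
    return alpha * (math.exp(eps0) - 1.0) ** 2 / n

# 10,000 users each running an eps0 = 0.5 local randomizer.
amplified = shuffle_rdp_leading(alpha=2, eps0=0.5, n=10_000)
```

Even at modest local budgets the central-model loss shrinks linearly in n, which is the "quadratic amplification" phenomenon expressed at the RDP level.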
4. Mechanism-Specific RDP Guarantees
Gaussian Mechanism
If f has ℓ₂-sensitivity Δ, releasing f(D) + N(0, σ²I) gives

D_α(M(D) ‖ M(D′)) = αΔ²/(2σ²).

Therefore, the mechanism is (α, αΔ²/(2σ²))-RDP (Mironov, 2017; Mironov et al., 2019).
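The identity can be checked numerically by one-dimensional quadrature over the output space (grid width and resolution below are illustrative choices, sized so the integrand's mode is well covered):

```python
import numpy as np

def gaussian_rdp_numeric(delta_f, sigma, alpha):
    """Numerically integrate D_alpha(N(0, s^2) || N(delta_f, s^2)) in log space."""
    half = 12.0 * sigma + abs(delta_f) * (alpha + 1)   # covers the shifted mode
    x = np.linspace(-half, half, 200_001)
    log_p = -x**2 / (2 * sigma**2)
    log_q = -(x - delta_f) ** 2 / (2 * sigma**2)
    # alpha*log p + (1-alpha)*log q; one shared Gaussian normalizer survives.
    t = alpha * log_p + (1 - alpha) * log_q - np.log(sigma * np.sqrt(2 * np.pi))
    m = t.max()
    log_integral = m + np.log(np.sum(np.exp(t - m)) * (x[1] - x[0]))
    return log_integral / (alpha - 1)

def gaussian_rdp_closed(delta_f, sigma, alpha):
    return alpha * delta_f**2 / (2 * sigma**2)
```

Agreement between the quadrature and αΔ²/(2σ²) across parameter settings is a useful regression test when implementing accountants.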
Laplace Mechanism
If f has ℓ₁-sensitivity Δ, releasing f(D) + Lap(b) yields, for α > 1,

D_α(M(D) ‖ M(D′)) = (1/(α − 1)) log((α/(2α − 1)) e^{(α−1)Δ/b} + ((α − 1)/(2α − 1)) e^{−αΔ/b})

(Mironov, 2017; Fu et al., 2023).
Truncated mechanisms (with output range restricted to a bounded interval) preserve the exact same RDP bounds as their untruncated counterparts (Fu et al., 2023).
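The Laplace closed form codes directly; as α grows it recovers the pure-DP budget Δ/b, a useful consistency check (the scale values below are illustrative):

```python
import math

def laplace_rdp(alpha, scale, sensitivity=1.0):
    """RDP of the Laplace mechanism with noise scale b at order alpha > 1
    (closed form from Mironov, 2017)."""
    r = sensitivity / scale
    a = alpha
    val = (a / (2 * a - 1)) * math.exp((a - 1) * r) \
        + ((a - 1) / (2 * a - 1)) * math.exp(-a * r)
    return math.log(val) / (a - 1)
```

At moderate orders the RDP budget is strictly below the pure-DP ε = Δ/b, which is the slack that composition analyses exploit.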
Quantized Gaussian and Mixture Mechanisms
Quantization applied after Gaussian noise further tightens RDP: for quantization to a finite number of levels, the privacy budget is strictly smaller than in the standard Gaussian case and decreases with lower bit-depth (Kang et al., 16 May 2024).
Gaussian sketching ("Gaussian mixing") provides explicit, instance-adaptive RDP bounds and efficient private regression procedures. For k-dimensional sketches, the resulting order-α budget depends on the minimum row norm of the data and the minimum eigenvalue of its Gram matrix (Lev et al., 30 May 2025).
5. Advanced Composition, Adaptive Strategies, and Robustness
RDP enables adaptive composition via privacy filters and odometers that support privacy budget reallocations and early stopping, ensuring global RDP/DP budgets are not exceeded (Lécuyer, 2021). These approaches give concrete gains in private SGD—e.g., higher test accuracy when using adaptive noise/batch schedules or early-stopped optimization at the same privacy loss.
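A deliberately simplified sketch of the odometer idea (the filters and odometers of Lécuyer, 2021 handle adaptivity far more carefully; the fixed order, σ = 8, and budget of 2.0 here are assumptions for illustration only):

```python
import math

class RdpOdometer:
    """Toy odometer: track cumulative RDP at a fixed order and report the
    implied (eps, delta)-DP spend after each charged step."""
    def __init__(self, alpha, delta):
        self.alpha, self.delta = alpha, delta
        self.spent_rdp = 0.0

    def charge(self, eps_rdp):
        """Charge one mechanism invocation; return the running DP epsilon."""
        self.spent_rdp += eps_rdp
        return self.spent_rdp + math.log(1.0 / self.delta) / (self.alpha - 1)

odometer = RdpOdometer(alpha=32, delta=1e-5)
per_step = 32 / (2 * 8.0**2)       # Gaussian step, sigma = 8, sensitivity 1
budget, steps = 2.0, 0
while odometer.charge(per_step) <= budget:
    steps += 1                      # another private step fits in the budget
```

Early stopping then simply means halting optimization once the running converted ε reaches the global budget, rather than pre-committing to a fixed step count.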
Propose-test-release (PTR) mechanisms can be analyzed tightly using RDP, enabling robust private learning under data corruption and local-sensitivity analyses (Wang et al., 2022).
In robust interpretability, RDP is used to quantify certifiable top-k robustness properties of interpretation maps, allowing provable insensitivity to adversarial perturbations of the input (Liu et al., 2021).
6. RDP in Generative and Synthetic Data Methods
RDP is the analytic backbone for private synthetic data generation. Adding randomness via Gaussian copulas or posterior sampling can directly satisfy -RDP, as the randomness in the generative sampling "hides" individual influences (Miura et al., 2023, Geumlek et al., 2017). In deep generative models like RDP-GAN and FLIP, tight per-iteration RDP accounting yields high utility for strong privacy guarantees, supporting fairness modules and adaptive noise scaling (Ma et al., 2020, Hyrup et al., 29 Aug 2025).
Posterior sampling mechanisms, especially in exponential families and GLMs, allow explicit control of the RDP curve by tempering likelihoods or tuning priors (Geumlek et al., 2017). For synthetic data, even when the underlying moments/covariance are computed non-privately, a mechanism that samples synthetic records from the resulting fitted distribution satisfies (α, ε)-RDP for appropriate orders, which can then be converted to standard (ε, δ)-DP (Miura et al., 2023).
7. Extensions: Heavy Tails, Alternative Mechanisms, and Limitations
Recent work generalizes RDP analysis to heavy-tailed SDEs, establishing the first dimension-tolerant RDP guarantees in this setting by leveraging fractional Poincaré inequalities (FPIs). For such processes, the resulting RDP bound scales with the FPI constant, the sensitivity, and the dataset size (Dupuis et al., 19 Nov 2025).
Limitations include dependence on regularity assumptions (e.g., Poincaré or log-Sobolev inequalities), the need for positive-definite covariances, and possible looseness at finite orders α or for non-Gaussian mechanisms.
References
- (Mironov, 2017) Ilya Mironov, "Rényi Differential Privacy"
- (Wang et al., 2018) Wang, Balle, Kasiviswanathan, "Subsampled Rényi Differential Privacy and Analytical Moments Accountant"
- (Mironov et al., 2019) Mironov, Talwar, Zhang, "Rényi Differential Privacy of the Sampled Gaussian Mechanism"
- (Lécuyer, 2021) Mathias Lécuyer, "Practical Privacy Filters and Odometers with Rényi Differential Privacy..."
- (Fu et al., 2023) Xiang, Li, "Truncated Laplace and Gaussian mechanisms of RDP"
- (Ma et al., 2020) Zhang et al., "RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network"
- (Lev et al., 30 May 2025) Ni, Nakkiran, "The Gaussian Mixing Mechanism: Renyi Differential Privacy via Gaussian Sketches"
- (Kang et al., 16 May 2024) Liang, Song, "The Effect of Quantization in Federated Learning: A Rényi Differential Privacy Perspective"
- (Girgis et al., 2021) Girgis et al., "On the Renyi Differential Privacy of the Shuffle Model"
- (Wang et al., 2022) Ye, Bao, "Renyi Differential Privacy of Propose-Test-Release..."
- (Miura et al., 2023) He, Wang, "On Rényi Differential Privacy in Statistics-Based Synthetic Data Generation"
- (Dupuis et al., 19 Nov 2025) Leleu, Moulines, "Rényi Differential Privacy for Heavy-Tailed SDEs via Fractional Poincaré Inequalities"
- (Liu et al., 2021) Liu, Zhang, "Certifiably Robust Interpretation via Renyi Differential Privacy"
- (Hyrup et al., 29 Aug 2025) Thomas et al., "Achieving Hilbert-Schmidt Independence Under Rényi Differential Privacy for Fair and Private Data Generation"
- (Geumlek et al., 2017) Foulds et al., "Rényi Differential Privacy Mechanisms for Posterior Sampling"
- (Berthier et al., 2019) Cheu, Smith, "Amplifying Rényi Differential Privacy via Shuffling"
- (Chen et al., 9 Jan 2024) Zhu, Wang, "Renyi Differential Privacy in the Shuffle Model: Enhanced Amplification Bounds"
- (Balle et al., 2019) Gaboardi, "Hypothesis Testing Interpretations and Renyi Differential Privacy"
- (Girgis et al., 2021) Girgis, Data, Diggavi, "Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning"