
A Cross Entropy Interpretation of Rényi Entropy for α-leakage (2401.15202v1)

Published 26 Jan 2024 in cs.IT and math.IT

Abstract: This paper proposes an $\alpha$-leakage measure for $\alpha \in [0,\infty)$ via a cross entropy interpretation of Rényi entropy. While Rényi entropy was originally defined as an $f$-mean for $f(t) = \exp((1-\alpha)t)$, we reveal that it is also an $\tilde{f}$-mean cross entropy measure for $\tilde{f}(t) = \exp(\frac{1-\alpha}{\alpha}t)$. Minimizing this Rényi cross entropy yields Rényi entropy, from which prior and posterior uncertainty measures are defined, corresponding to the adversary's knowledge of the sensitive attribute before and after data release, respectively. The $\alpha$-leakage is proposed as the difference between the $\tilde{f}$-mean prior and posterior uncertainty measures, which is exactly the Arimoto mutual information. This not only extends the existing $\alpha$-leakage from $\alpha \in [1,\infty)$ to the full Rényi order range $\alpha \in [0,\infty)$ in a well-founded way, with $\alpha = 0$ referring to nonstochastic leakage, but also reveals that the existing maximal leakage is an $\tilde{f}$-mean of an elementary $\alpha$-leakage for all $\alpha \in [0,\infty)$, which generalizes the existing pointwise maximal leakage.
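The quantities the abstract builds on can be computed directly for discrete distributions. The following is a minimal numerical sketch (not the paper's own code): it evaluates Rényi entropy $H_\alpha(X) = \frac{1}{1-\alpha}\log\sum_x p(x)^\alpha$, the Arimoto conditional entropy $H_\alpha^{\mathrm{A}}(X|Y) = \frac{\alpha}{1-\alpha}\log\sum_y \big(\sum_x p(x,y)^\alpha\big)^{1/\alpha}$, and their difference, the Arimoto mutual information that the paper identifies with $\alpha$-leakage. Function names and the array-based interface are illustrative assumptions.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha(X) = (1/(1-alpha)) * log sum_x p(x)^alpha (in nats)."""
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):
        # alpha -> 1 recovers Shannon entropy
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def arimoto_conditional_entropy(pxy, alpha):
    """Arimoto conditional Rényi entropy of X given Y.

    pxy is the joint distribution as a 2-D array indexed [x, y].
    H_alpha^A(X|Y) = (alpha/(1-alpha)) * log sum_y (sum_x p(x,y)^alpha)^(1/alpha).
    """
    pxy = np.asarray(pxy, dtype=float)
    if np.isclose(alpha, 1.0):
        # alpha -> 1 recovers Shannon conditional entropy
        py = pxy.sum(axis=0)
        cond = pxy / py[None, :]
        mask = pxy > 0
        return -np.sum(pxy[mask] * np.log(cond[mask]))
    inner = np.sum(pxy ** alpha, axis=0) ** (1.0 / alpha)
    return (alpha / (1.0 - alpha)) * np.log(np.sum(inner))

def arimoto_mutual_information(pxy, alpha):
    """alpha-leakage as Arimoto mutual information:
    I_alpha^A(X;Y) = H_alpha(X) - H_alpha^A(X|Y)."""
    px = np.asarray(pxy, dtype=float).sum(axis=1)
    return renyi_entropy(px, alpha) - arimoto_conditional_entropy(pxy, alpha)
```

As a sanity check, independence of $X$ and $Y$ gives zero leakage for any order, while a correlated joint distribution gives a strictly positive value that grows with the adversary's inferential power as $\alpha$ increases.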
