$α$-leakage by Rényi Divergence and Sibson Mutual Information (2405.00423v8)
Abstract: For $\tilde{f}(t) = \exp(\frac{\alpha-1}{\alpha}t)$, this paper proposes an $\tilde{f}$-mean information gain measure. Rényi divergence is shown to be the maximum $\tilde{f}$-mean information gain incurred at each elementary event $y$ of the channel output $Y$, and Sibson mutual information is the $\tilde{f}$-mean of this $Y$-elementary information gain. Both are proposed as $\alpha$-leakage measures, quantifying the most information an adversary can obtain about sensitive data. It is shown that the existing $\alpha$-leakage by Arimoto mutual information can be expressed as an $\tilde{f}$-mean measure under a scaled probability. Further, Sibson mutual information is interpreted as the maximum $\tilde{f}$-mean information gain over all estimation decisions applied to the channel output.
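As an illustration of the quantity the abstract proposes as an $\alpha$-leakage measure, the sketch below (an assumption of this summary, not code from the paper) evaluates Sibson mutual information of order $\alpha$ for a discrete channel via the standard closed form $I_\alpha(X;Y) = \frac{\alpha}{\alpha-1}\log \sum_y \big(\sum_x P(x)P(y|x)^\alpha\big)^{1/\alpha}$; the function name and array layout are illustrative choices.

```python
import numpy as np

def sibson_mi(p_x, p_y_given_x, alpha):
    """Sibson mutual information of order alpha (alpha > 0, alpha != 1).

    p_x          : 1-D array, input distribution P(x)
    p_y_given_x  : 2-D array, channel matrix with entry [x, y] = P(y|x)
    """
    # Inner sum over x, then the 1/alpha power, for each output symbol y.
    inner = np.sum(p_x[:, None] * p_y_given_x ** alpha, axis=0) ** (1.0 / alpha)
    # Outer sum over y and the alpha/(alpha-1) scaling.
    return alpha / (alpha - 1.0) * np.log(np.sum(inner))

# Sanity checks: a noiseless channel leaks H(X) = log|X| nats,
# while an input-independent channel leaks nothing.
p_x = np.array([0.5, 0.5])
print(sibson_mi(p_x, np.eye(2), alpha=2.0))                      # log 2
print(sibson_mi(p_x, np.array([[0.3, 0.7], [0.3, 0.7]]), 2.0))   # 0
```

For $\alpha \to 1$ this expression recovers ordinary (Shannon) mutual information, and for $\alpha \to \infty$ it recovers maximal leakage, which is why a single order parameter tunes the adversarial model.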
- F. du Pin Calmon and N. Fawaz, “Privacy against statistical inference,” in Proceedings of 50th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, 2012, pp. 1401–1408.
- L. Sankar, S. R. Rajagopalan, and H. V. Poor, “Utility-privacy tradeoffs in databases: An information-theoretic approach,” IEEE Transactions on Information Forensics and Security, vol. 8, no. 6, pp. 838–852, Jun. 2013.
- I. Issa, A. B. Wagner, and S. Kamath, “An operational approach to information leakage,” IEEE Transactions on Information Theory, vol. 66, no. 3, pp. 1625–1657, Mar. 2020.
- J. Liao, O. Kosut, L. Sankar, and F. du Pin Calmon, “Tunable measures for information leakage and applications to privacy-utility tradeoffs,” IEEE Transactions on Information Theory, vol. 65, no. 12, pp. 8043–8066, Dec. 2019.
- N. Ding, M. A. Zarrabian, and P. Sadeghi, “A cross entropy interpretation of Rényi entropy for α-leakage,” in Proceedings of IEEE International Symposium on Information Theory, Athens, 2024. [Online]. Available: https://arxiv.org/abs/2401.15202
- A. Rényi, “On measures of entropy and information,” in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, vol. 4. University of California Press, 1961, pp. 547–562.
- S. Arimoto, “Information measures and capacity of order α for discrete memoryless channels,” Topics in Information Theory, 1977.
- R. Sibson, “Information radius,” Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, vol. 14, no. 2, pp. 149–160, 1969.
- S. Verdú, “Error exponents and α-mutual information,” Entropy, vol. 23, no. 2, p. 199, Feb. 2021.
- S. Saeidian, G. Cervia, T. J. Oechtering, and M. Skoglund, “Pointwise maximal leakage,” IEEE Transactions on Information Theory, vol. 69, no. 12, pp. 8054–8080, Dec. 2023.
- S. Arimoto, “Computation of random coding exponent functions,” IEEE Transactions on Information Theory, vol. 22, no. 6, pp. 665–671, Nov. 1976.
- R. Gallager, “A simple derivation of the coding theorem and some applications,” IEEE Transactions on Information Theory, vol. 11, no. 1, pp. 3–18, Jan. 1965.
- A. Kolmogorov, “Sur la notion de la moyenne,” Atti Accad. Naz. Lincei, vol. 12, no. 6, pp. 388–391, 1930.
- M. Nagumo, “Über eine Klasse der Mittelwerte,” Japanese Journal of Mathematics: Transactions and Abstracts, vol. 7, pp. 71–79, 1930.
- N. Ding, M. A. Zarrabian, and P. Sadeghi, “α-information-theoretic privacy watchdog and optimal privatization scheme,” in Proceedings of IEEE International Symposium on Information Theory, Melbourne, 2021, pp. 2584–2589.
- M. A. Zarrabian, N. Ding, P. Sadeghi, and T. Rakotoarivelo, “Enhancing utility in the watchdog privacy mechanism,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Singapore, 2022, pp. 2979–2983.
- H. Hsu, S. Asoodeh, and F. P. Calmon, “Information-theoretic privacy watchdogs,” in Proceedings of IEEE International Symposium on Information Theory, Paris, 2019, pp. 552–556.
- M. A. Zarrabian, N. Ding, and P. Sadeghi, “Asymmetric local information privacy and the watchdog mechanism,” in Proceedings of IEEE Information Theory Workshop, Mumbai, 2022, pp. 7–12.
- ——, “On the lift, related privacy measures, and applications to privacy–utility trade-offs,” Entropy, vol. 25, no. 4, p. 679, Apr. 2023.
- Y. Polyanskiy and S. Verdú, “Arimoto channel coding converse and Rényi divergence,” in Proceedings of 48th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, 2010, pp. 1327–1333.
- S. Verdú, “α-mutual information,” in Proceedings of Information Theory and Applications Workshop, San Diego, CA, 2015, pp. 1–6.
- I. Csiszár, “Generalized cutoff rates and Rényi’s information measures,” IEEE Transactions on Information Theory, vol. 41, no. 1, pp. 26–34, Jan. 1995.
- S. Arimoto, “An algorithm for computing the capacity of arbitrary discrete memoryless channels,” IEEE Transactions on Information Theory, vol. 18, no. 1, pp. 14–20, Jan. 1972.
- R. Blahut, “Computation of channel capacity and rate-distortion functions,” IEEE Transactions on Information Theory, vol. 18, no. 4, pp. 460–473, Jul. 1972.
- I. Csiszár, “A class of measures of informativity of observation channels,” Periodica Mathematica Hungarica, vol. 2, no. 1–4, pp. 191–213, Mar. 1972.
- G. Aishwarya and M. Madiman, “Conditional Rényi entropy and the relationships between Rényi capacities,” Entropy, vol. 22, no. 5, p. 526, May 2020.
- B. Nakiboglu, “The Rényi capacity and center,” IEEE Transactions on Information Theory, vol. 65, no. 2, pp. 841–860, Feb. 2019.
- M. Hayashi and V. Y. F. Tan, “Equivocations, exponents, and second-order coding rates under various Rényi information measures,” IEEE Transactions on Information Theory, vol. 63, no. 2, pp. 975–1005, Feb. 2017.