An Extension of the Adversarial Threat Model in Quantitative Information Flow (2409.04108v3)
Abstract: In this paper, we propose an extended framework for quantitative information flow (QIF), aligned with the previously proposed core-concave generalization of entropy measures, that includes adversaries who use the Kolmogorov-Nagumo $f$-mean to infer secrets in a private system. Specifically, in our setting, an adversary uses the Kolmogorov-Nagumo $f$-mean to compute its best actions before and after observing the system's randomized outputs. This leads to generalized notions of prior and posterior vulnerability, for which we derive generalized axiomatic relations that elucidate how these $f$-mean-based vulnerabilities interact with each other. We demonstrate the usefulness of this framework by showing that several notions of leakage that were derived outside of the QIF framework, and had so far seemed incompatible with it, are indeed explainable via such an extension of QIF. These leakage measures include $\alpha$-leakage, which coincides with Arimoto mutual information of order $\alpha$; maximal $\alpha$-leakage, which is the $\alpha$-leakage capacity; and maximal $(\alpha,\beta)$-leakage, which generalizes both of the above and captures local differential privacy as a special case. We define the notion of generalized capacity and provide partial results for special classes of functions used in the Kolmogorov-Nagumo mean. We also propose a new pointwise notion of gain function, which we coin pointwise information gain. We show that this pointwise information gain can explain Rényi divergence and Sibson mutual information of order $\alpha \in [0,\infty]$ as the Kolmogorov-Nagumo average of the gain with a proper choice of the function $f$.
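To make the central construction concrete, below is a minimal Python sketch of the Kolmogorov-Nagumo $f$-mean and of $f$-mean-based prior and posterior vulnerabilities. The gain matrix `G`, the channel `C`, and the expected-value form of the posterior vulnerability are illustrative assumptions rather than the paper's exact definitions; the Rényi-divergence check at the end uses the known identity that $D_\alpha(P\|Q)$ is the Kolmogorov-Nagumo mean of the pointwise log-likelihood ratio under $f(t) = e^{(\alpha-1)t}$, for $\alpha \neq 1$.

```python
import numpy as np

def kn_mean(values, weights, f, f_inv):
    """Kolmogorov-Nagumo f-mean: f^{-1}( sum_i w_i * f(v_i) )."""
    v = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    return f_inv(np.dot(w, f(v)))

def prior_vulnerability_f(pi, G, f, f_inv):
    """Assumed form of the f-mean-based prior vulnerability: the adversary
    scores each action w by the f-mean of its gains over the secrets and
    takes the best action: max_w f^{-1}( sum_x pi[x] f(G[w, x]) ).
    G is a gain matrix: rows are actions, columns are secrets."""
    return max(kn_mean(G[i], pi, f, f_inv) for i in range(G.shape[0]))

def posterior_vulnerability_f(pi, C, G, f, f_inv):
    """Illustrative posterior vulnerability: average the per-observation
    prior vulnerability of the posteriors, weighted by p(y). (The paper
    may aggregate over observations with an f-mean instead; this is the
    standard QIF expected-value form, used here only as a sketch.)
    C is a channel matrix: rows are secrets, columns are observations."""
    joint = pi[:, None] * C          # joint distribution p(x, y)
    p_y = joint.sum(axis=0)          # marginal on observations
    posteriors = joint / p_y         # column j is the posterior p(x | y_j)
    per_obs = [prior_vulnerability_f(posteriors[:, j], G, f, f_inv)
               for j in range(C.shape[1])]
    return np.dot(p_y, per_obs)

def renyi_divergence_via_kn(P, Q, alpha):
    """Known identity: D_alpha(P||Q) is the KN mean of the pointwise
    log-likelihood ratio log(P/Q) under f(t) = exp((alpha - 1) t)."""
    f = lambda t: np.exp((alpha - 1.0) * t)
    f_inv = lambda s: np.log(s) / (alpha - 1.0)
    gain = np.log(P / Q)             # a pointwise information gain
    return kn_mean(gain, P, f, f_inv)

# Sanity checks: with f = identity and an identity gain matrix, the
# construction collapses to ordinary Bayes vulnerability max_x pi[x].
pi = np.array([0.5, 0.3, 0.2])
G = np.eye(3)                        # gain 1 for guessing the secret exactly
C = np.array([[0.8, 0.2],            # a toy 3x2 channel
              [0.5, 0.5],
              [0.1, 0.9]])
ident = (lambda t: t, lambda t: t)
print(prior_vulnerability_f(pi, G, *ident))        # 0.5
print(posterior_vulnerability_f(pi, C, G, *ident)) # 0.58

P = np.array([0.6, 0.3, 0.1])
Q = np.array([0.3, 0.3, 0.4])
print(renyi_divergence_via_kn(P, Q, alpha=2.0))    # log(1.525) ~ 0.4219
```

With $f$ the identity and an identity gain matrix, the sketch collapses to ordinary Bayes vulnerability, the QIF baseline that the paper's $f$-mean construction generalizes.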