An Extension of the Adversarial Threat Model in Quantitative Information Flow (2409.04108v3)

Published 6 Sep 2024 in cs.IT and math.IT

Abstract: In this paper, we propose an extended framework for quantitative information flow (QIF), aligned with the previously proposed core-concave generalization of entropy measures, that includes adversaries who use the Kolmogorov-Nagumo $f$-mean to infer secrets in a private system. Specifically, in our setting, an adversary uses the Kolmogorov-Nagumo $f$-mean to compute its best actions before and after observing the system's randomized outputs. This leads to generalized notions of prior and posterior vulnerability, together with generalized axiomatic relations that we derive to elucidate how these $f$-mean based vulnerabilities interact with each other. We demonstrate the usefulness of this framework by showing that several notions of leakage that were derived outside of the QIF framework, and so far seemed incompatible with it, are in fact explainable via such an extension of QIF. These leakage measures include $\alpha$-leakage, which coincides with the Arimoto mutual information of order $\alpha$; maximal $\alpha$-leakage, which is the $\alpha$-leakage capacity; and maximal $(\alpha,\beta)$-leakage, which generalizes both and captures local differential privacy as a special case. We define the notion of generalized capacity and provide partial results for special classes of functions used in the Kolmogorov-Nagumo mean. We also propose a new pointwise notion of gain function, which we call pointwise information gain. We show that this pointwise information gain can explain Rényi divergence and Sibson mutual information of order $\alpha \in [0,\infty]$ as the Kolmogorov-Nagumo average of the gain with a proper choice of the function $f$.
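
For readers unfamiliar with the Kolmogorov-Nagumo $f$-mean that the extended threat model is built on, the short Python sketch below illustrates its textbook definition, $M_f(x_1,\dots,x_n; w) = f^{-1}\big(\sum_i w_i f(x_i)\big)$ for a continuous, strictly monotonic $f$ and a probability vector $w$, and shows how familiar means fall out of particular choices of $f$. This is a minimal illustrative sketch of the standard definition only; the function names and sample values are our own and are not taken from the paper.

```python
import math

def kn_mean(values, weights, f, f_inv):
    """Kolmogorov-Nagumo f-mean: f^{-1}( sum_i w_i * f(x_i) ).

    `f` must be continuous and strictly monotonic on the range of `values`,
    `f_inv` its inverse, and `weights` a probability vector
    (non-negative entries summing to 1).
    """
    return f_inv(sum(w * f(x) for x, w in zip(values, weights)))

values = [1.0, 2.0, 4.0]
weights = [0.5, 0.25, 0.25]

# f(t) = t recovers the ordinary weighted arithmetic mean.
arithmetic = kn_mean(values, weights, lambda t: t, lambda t: t)

# f(t) = log t recovers the weighted geometric mean.
geometric = kn_mean(values, weights, math.log, math.exp)

# f(t) = t**p recovers the weighted power (Hoelder) mean of order p.
p = 2.0
power = kn_mean(values, weights, lambda t: t**p, lambda t: t**(1.0 / p))

print(arithmetic)  # 2.0
print(geometric)   # ~1.68  (exp(0.5*log 1 + 0.25*log 2 + 0.25*log 4))
print(power)       # ~2.35  (sqrt(0.5*1 + 0.25*4 + 0.25*16))
```

Per the abstract, the adversary replaces the ordinary arithmetic averaging used in standard QIF vulnerabilities with such an $f$-mean, and it is a proper (in the related literature, typically exponential-type) choice of $f$ that lets Rényi divergence and Sibson mutual information of order $\alpha$ emerge as Kolmogorov-Nagumo averages of the proposed pointwise information gain.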
