Distributed Differential Privacy in Multi-Armed Bandits (2206.05772v1)

Published 12 Jun 2022 in cs.LG and cs.CR

Abstract: We consider the standard $K$-armed bandit problem under a distributed trust model of differential privacy (DP), which makes it possible to guarantee privacy without a trustworthy server. Under this trust model, previous work largely focuses on achieving privacy using a shuffle protocol, in which a batch of users' data is randomly permuted before being sent to a central server. This protocol achieves an ($\epsilon,\delta$)- or approximate-DP guarantee at the price of an additional additive $O\left(\frac{K\log T\sqrt{\log(1/\delta)}}{\epsilon}\right)$ cost in $T$-step cumulative regret. In contrast, the optimal privacy cost for achieving a stronger ($\epsilon,0$)- or pure-DP guarantee under the widely used central trust model is only $\Theta\left(\frac{K\log T}{\epsilon}\right)$, which, however, requires a trusted server. In this work, we aim to obtain a pure-DP guarantee under the distributed trust model while sacrificing no more regret than under the central trust model. We achieve this by designing a generic bandit algorithm based on successive arm elimination, where privacy is guaranteed by corrupting rewards with equivalent discrete Laplace noise, ensured by a secure computation protocol. We also show that our algorithm, when instantiated with Skellam noise and the secure protocol, ensures \emph{R\'{e}nyi differential privacy} (a stronger notion than approximate DP) under the distributed trust model with a privacy cost of $O\left(\frac{K\sqrt{\log T}}{\epsilon}\right)$.
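To make the high-level recipe in the abstract concrete, the sketch below simulates a batched successive-arm-elimination bandit in which each arm's per-batch reward sum is perturbed with discrete Laplace noise before being averaged. This is an illustrative, non-secure simulation only: the paper's secure computation protocol is abstracted away (noise is simply added locally), and the batch schedule, confidence radius, and all names such as `private_successive_elimination` are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch (assumptions, not the paper's exact algorithm or protocol):
# batched successive arm elimination with discrete Laplace noise added to each
# arm's batch reward sum. The secure aggregation step is abstracted away.
import numpy as np

rng = np.random.default_rng(0)

def discrete_laplace(scale, rng):
    # The difference of two i.i.d. geometric variables is discrete-Laplace distributed,
    # which keeps noisy sums integer-valued.
    p = 1.0 - np.exp(-1.0 / scale)
    return int(rng.geometric(p)) - int(rng.geometric(p))

def private_successive_elimination(means, T, epsilon):
    K = len(means)
    active = list(range(K))
    best = max(means)
    regret, t, phase = 0.0, 0, 1
    while t < T and len(active) > 1:
        pulls = 2 ** phase                      # doubling batch schedule (assumption)
        est = {}
        for a in active:
            rewards = rng.binomial(1, means[a], size=pulls)   # Bernoulli rewards
            regret += pulls * (best - means[a])
            t += pulls
            # Perturb the batch sum (sensitivity 1 per user) with discrete Laplace noise.
            noisy_sum = rewards.sum() + discrete_laplace(1.0 / epsilon, rng)
            est[a] = noisy_sum / pulls
        # Eliminate arms whose upper confidence bound falls below the best lower bound;
        # the radius combines a Hoeffding term and a noise term (illustrative choice).
        rad = np.sqrt(np.log(4 * K * T) / (2 * pulls)) + np.log(4 * K * T) / (epsilon * pulls)
        top = max(est.values())
        active = [a for a in active if est[a] + rad >= top - rad]
        phase += 1
    return regret

print(private_successive_elimination([0.9, 0.8, 0.5], T=20_000, epsilon=1.0))
```

A Skellam-noise variant of this sketch would replace `discrete_laplace` with the difference of two independent Poisson draws, which is how Skellam noise is generated; the paper's distributed guarantee additionally relies on the secure protocol that this local simulation omits.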

Authors (2)
  1. Sayak Ray Chowdhury (23 papers)
  2. Xingyu Zhou (82 papers)
Citations (12)
