
Privacy Guarantees in Posterior Sampling under Contamination (arXiv:2403.07772v2)

Published 12 Mar 2024 in math.ST and stat.TH

Abstract: In recent years, differential privacy has been adopted by tech companies and governmental agencies as the standard for measuring privacy in algorithms. In this article, we study differential privacy in Bayesian posterior sampling settings. We begin by considering the most common privatization setting, in which Laplace or Gaussian noise is simply injected into the output. To achieve better differential privacy, we instead adopt Huber's contamination model for use within privacy settings: rather than injecting noise into the output, we replace randomly chosen data points with samples from a heavy-tailed distribution. We derive bounds on the differential privacy level $(\epsilon,\delta)$ of our approach without requiring a bounded observation and parameter space, a restriction commonly imposed in the existing literature. We further study the effect of sample size on the privacy level and the rate at which $(\epsilon,\delta)$ converges to zero. Asymptotically, our contamination approach is fully private at no cost in information loss. We also present examples of inference models to which our setup applies, together with theoretical estimates of the convergence rate and accompanying simulations.
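For reference, the $(\epsilon,\delta)$ privacy level discussed above is the standard one: a randomized mechanism $M$ is $(\epsilon,\delta)$-differentially private if, for all datasets $D$ and $D'$ differing in a single record and all measurable output sets $S$,

$$\Pr[M(D) \in S] \le e^{\epsilon}\,\Pr[M(D') \in S] + \delta.$$

The sketch below contrasts the two mechanisms the abstract mentions: classical output perturbation with Laplace noise, and Huber-style contamination, in which each observation is independently replaced, with some probability, by a draw from a heavy-tailed distribution. This is a minimal illustration, not the paper's code; the contamination probability `alpha`, the choice of a standard Cauchy contaminating distribution, and the sensitivity value in the example are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def contaminate(data, alpha=0.1):
    """Huber-style contamination: independently replace each observation
    with probability alpha by a heavy-tailed draw (here, standard Cauchy;
    the paper's heavy-tailed distribution may differ)."""
    data = np.asarray(data, dtype=float)
    mask = rng.random(data.shape) < alpha
    heavy = rng.standard_cauchy(size=data.shape)
    return np.where(mask, heavy, data)

def laplace_output_perturbation(statistic, sensitivity, epsilon):
    """Classical output perturbation: add Laplace noise calibrated to the
    statistic's global sensitivity to obtain epsilon-DP."""
    return statistic + rng.laplace(scale=sensitivity / epsilon)

# Example: privatize a sample mean both ways. Clipping to [0, 1] makes the
# mean's global sensitivity 1/n, which output perturbation relies on; the
# contamination route needs no such boundedness assumption.
x = np.clip(rng.normal(loc=0.5, scale=0.2, size=100), 0.0, 1.0)
noisy_mean = laplace_output_perturbation(x.mean(), sensitivity=1.0 / x.size, epsilon=1.0)
contaminated_mean = contaminate(x, alpha=0.1).mean()
```

Under the contamination route, downstream Bayesian posterior sampling would be run on the contaminated data rather than on a noised output statistic, which is the design choice the abstract highlights.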
