Smoothed Differential Privacy (2107.01559v4)

Published 4 Jul 2021 in cs.CR and cs.LG

Abstract: Differential privacy (DP) is a widely accepted and widely applied notion of privacy based on worst-case analysis. DP classifies most mechanisms without additive noise as non-private (Dwork et al., 2014), so additive noise is introduced to achieve DP. However, in many real-world applications, additive noise is undesirable (Bagdasaryan et al., 2019) and sometimes prohibited (Liu et al., 2020). In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis (Spielman & Teng, 2004). Our notion, smoothed DP, can effectively measure the privacy leakage of mechanisms without additive noise under realistic settings. We prove that any discrete mechanism with a sampling procedure is more private than DP predicts, while many continuous mechanisms with sampling procedures remain non-private under smoothed DP. In addition, we prove several desirable properties of smoothed DP, including composition, robustness to post-processing, and distribution reduction. Based on these properties, we propose an efficient algorithm to calculate the privacy parameters of smoothed DP. Experimentally, we verify that, according to smoothed DP, discrete sampling mechanisms are private in real-world elections, and that some discrete neural networks can be private without adding any noise. We believe these results contribute to the theoretical foundation of realistic privacy measures beyond worst-case analysis.
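
For readers skimming the abstract, it may help to recall the guarantee being relaxed. Standard (ε, δ)-DP is a worst-case bound: for every pair of neighboring datasets D, D' (differing in one record) and every event S, the mechanism M must satisfy the first inequality below. The second display is a minimal sketch, in LaTeX, of the "worst average-case" idea; it is our paraphrase of the abstract, not necessarily the paper's exact definition, and the distribution family Π and pointwise leakage δ_ε(M, D) are notation we introduce for illustration.

% Standard (epsilon, delta)-DP: worst case over neighboring datasets D, D'
% and all events S.
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta

% Hedged sketch of a smoothed-analysis-style relaxation (illustrative only,
% not the paper's exact formulation): \delta_{\varepsilon}(\mathcal{M}, D)
% denotes the smallest \delta making the inequality above hold at a fixed D,
% maximized over neighbors D' and events S. The worst case then ranges over
% a family \Pi of input distributions, and the leakage is averaged over the
% dataset drawn from \pi.
\max_{\pi \in \Pi} \; \mathbb{E}_{D \sim \pi}\!\left[\, \delta_{\varepsilon}(\mathcal{M}, D) \,\right] \;\le\; \delta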

References (75)
  1. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp.  308–318, 2016.
  2. QSGD: Communication-efficient SGD via gradient quantization and encoding. In Proc. Int. Conf. on Neural Inf. Process. Syst., pp.  1707–1718, 2017.
  3. Fixed point optimization of deep convolutional neural networks for object recognition. In Proc. Int. Conf. on Acoust., Speech & Signal Process., pp.  1131–1135, 2015.
  4. Differential privacy has disparate impact on model accuracy. In Proc. Int. Conf. on Neural Inf. Process. Syst., pp.  15479–15488, 2019.
  5. Privacy profiles and amplification by subsampling. J. of Privacy and Confidentiality, 10(1), 2020.
  6. Weight quantization in Boltzmann machines. Neural Netw., 4(3):405–409, Jan. 1991.
  7. Scalable methods for 8-bit training of neural networks. In Proc. Int. Conf. on Neural Inf. Process. Syst., pp.  5151–5159, 2018.
  8. Coupled-worlds privacy: Exploiting adversarial uncertainty in statistical data privacy. In Proc. Annu. Symp. on Found. of Comput. Sci., pp.  439–448, 2013.
  9. Towards reality: Smoothed analysis in computational social choice. In Proc. Int. Conf. Auton. Agents & Multiagent Syst., pp.  1691–1695, 2020.
  10. Applying differential privacy to matrix factorization. In Proceedings of the 9th ACM Conference on Recommender Systems, pp.  107–114, 2015.
  11. Smoothed analysis of tensor decompositions. In Proc. Annu. ACM Symp. on Theory of Comput., pp.  594–603, 2014.
  12. Smoothed Analysis of the Perceptron Algorithm for Linear Programming. Pittsburgh, PA, USA: Carnegie Mellon University, 2002.
  13. USPS ballot problems unlikely to change outcomes in competitive states. The Washington Post, 2020. URL https://www.washingtonpost.com/business/2020/11/04/ballot-election-problems-usps/. Accessed: Mar. 28, 2023.
  14. George EP Box. Robustness in the Strategy of Scientific Model Building. Amsterdam, Netherlands: Elsevier, 1979.
  15. Smoothed analysis of belief propagation for minimum-cost flow and matching. In Proc. Int. Workshop on Algorithms and Comput., pp.  182–193, 2013.
  16. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In Proc. Theory of Cryptogr. Conf., pp.  635–658, 2016.
  17. Average-case averages: private algorithms for smooth sensitivity and mean estimation. In Proc. Int. Conf. on Neural Inf. Process. Syst., pp.  181–191, 2019.
  18. BinaryConnect: Training deep neural networks with binary weights during propagations. In Proc. Int. Conf. on Neural Inf. Process. Syst., pp.  3123–3131, 2015.
  19. Gaussian differential privacy. arXiv:1905.02383, 2019.
  20. High-dimensional stochastic gradient quantization for communication-efficient edge learning. IEEE Trans. on Signal Process., 68:2128–2142, Mar. 2020.
  21. Concentrated differential privacy. arXiv:1603.01887, 2016.
  22. Our data, ourselves: Privacy via distributed noise generation. In Proc. Annu. Int. Conf. on the Theory and Appl. of Cryptographic Techn., pp.  486–503, 2006a.
  23. Calibrating noise to sensitivity in private data analysis. In Proc. Theory of Cryptogr. Conf. (TCC), pp.  265–284, 2006b.
  24. The algorithmic foundations of differential privacy. Found. and Trends in Theor. Comput. Sci., 9(3-4):211–407, 2014.
  25. Weight discretization paradigm for optical neural networks. In Proc. Opt. Interconnections & Netw., volume 1281, pp.  164–173, 1990.
  26. Comparing approximate and probabilistic differential privacy parameters. Inf. Proc. Lett., 182:106380, 2023. URL https://www.sciencedirect.com/science/article/pii/S0020019023000236.
  27. Yunhui Guo. A survey on methods and theories of quantized neural networks. arXiv:1808.04752, 2018.
  28. Smoothed analysis of online and differentially private learning. In Proc. Int. Conf. on Neural Inf. Process. Syst., pp.  9203–9215, 2020.
  29. Random differential privacy. arXiv:1112.2680, 2011.
  30. Adversarially robust streaming algorithms via differential privacy. J. of the ACM, 69(6):1–14, 2022.
  31. Binarized neural networks. In Proc. Int. Conf. on Neural Inf. Process. Syst., pp.  4114–4122, 2016.
  32. Quantized neural networks: Training neural networks with low precision weights and activations. The J. of Mach. Learn. Res., 18(1):6869–6898, 2017.
  33. Differential privacy and machine learning: a survey and review. arXiv:1412.7584, 2014.
  34. The composition theorem for differential privacy. In Proc. Int. Conf. on Mach. Learn., pp.  1376–1385, 2015.
  35. Learning and smoothed analysis. In 2009 50th Annu. IEEE Symp. on Found. of Comput. Sci., pp.  395–404, 2009.
  36. Bitwise neural networks. arXiv:1601.06071, 2016.
  37. The ultimate planar convex hull algorithm? SIAM J. on Comput., 15(1):287–299, 1986.
  38. Discrete sequence prediction and its applications. Mach. Learn., 15(1):43–68, Apr. 1994.
  39. Certified robustness to adversarial examples with differential privacy. In Proc. Symp. on Secur. and Privacy (SP), pp.  656–672, 2019.
  40. Dave Leip. Dave Leip’s Atlas of the US Presidential Elections, 2023. URL https://uselectionatlas.org/. Accessed: Mar. 28, 2023.
  41. Differentially private Condorcet voting. In Proc. AAAI Conf. Artif. Intell., volume 37, pp.  5755–5763, 2023.
  42. Fixed point quantization of deep convolutional networks. In Proc. Int. Conf. on Mach. Learn., pp.  2849–2858, 2016.
  43. Towards accurate binary convolutional neural network. In Proc. Int. Conf. on Neural Inf. Process. Syst., pp.  344–352, 2017.
  44. Ao Liu and Lirong Xia. The semi-random likelihood of doctrinal paradoxes. In Proc. AAAI Conf. Artif. Intell., pp.  5124–5132, 2022.
  45. Differential privacy for eye-tracking data. In Proc. of the 11th ACM Symp. on Eye Tracking Res. & Appl., pp.  1–10, 2019.
  46. How private are commonly-used voting rules? In Proc. Conf. on Uncertainty in Artif. Intell., pp.  629–638, 2020.
  47. Interpretation maps with guaranteed robustness, May 24 2022a. US Patent 11,341,598.
  48. Certifiably robust interpretation, March 3 2022b. US Patent App. 17/005,144.
  49. Privacy: Theory meets practice on the map. In IEEE 24th Int. Conf. on Data Eng., pp.  277–286. IEEE, 2008.
  50. Worst-case and smoothed analysis of k-means clustering with Bregman divergences. In Proc. Int. Symp. on Algorithms and Comput., pp.  1024–1033, 2009.
  51. Fast neural networks without multipliers. IEEE Trans. on Neural Netw., 4(1):53–62, Jan. 1993.
  52. Ilya Mironov. Rényi differential privacy. In Proc. Comput. Secur. Found. Symp., pp.  263–275, 2017.
  53. WRPN: Wide reduced-precision networks. In Proc. Int. Conf. on Learn. Representations, pp.  1–11, 2018.
  54. Differential privacy in practice. J. of Comput. Sci. & Engineering, 7(3):177–186, Sep. 2013.
  55. Smooth sensitivity and sampling in private data analysis. In Proc. Annu. ACM Symp. on Theory of Comput., pp.  75–84, 2007.
  56. Scalable private learning with PATE. In Proc. Int. Conf. on Learn. Representations, pp.  1–11, 2018.
  57. XNOR-Net: ImageNet classification using binary convolutional neural networks. In Proc. Eur. Conf. on Comput. Vision, pp.  525–542, 2016.
  58. Signal processing and machine learning with differential privacy: Algorithms and challenges for continuous data. IEEE Signal Process. Mag., 30(5):86–94, 2013.
  59. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In Proc. Annu. Conf. of the Int. Speech Commun. Assoc., pp.  1058–1062, 2014.
  60. Privacy enhanced matrix factorization for recommendation with local differential privacy. IEEE Transactions on Knowledge and Data Engineering, 30(9):1770–1782, 2018.
  61. Steven W Smith. The Scientist and Engineer’s Guide to Digital Signal Processing, 1997.
  62. Daniel A Spielman. The smoothed analysis of algorithms. In Proc. Int. Symp. on Fundam. of Comput. Theory, pp.  17–18, 2005.
  63. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. J. of the ACM, 51(3):385–463, 2004.
  64. Multilayer feedforward neural networks with single powers-of-two weights. IEEE Trans. on Signal Process., 41(8):2724–2727, Aug. 1993.
  65. Privacy loss in Apple’s implementation of differential privacy on macOS 10.12. arXiv:1709.02753, 2017.
  66. Differentially private feature selection via stability arguments, and the robustness of the lasso. In Proc. Conf. on Learn. Theory, pp.  819–850, 2013.
  67. Bayesian differential privacy for machine learning. In Proc. Int. Conf. on Mach. Learn., pp.  9583–9592, 2020.
  68. Improving the speed of neural networks on CPUs. In Proc. Deep Learn. & Unsupervised Feature Learn. NIPS Workshop, pp.  4, 2011.
  69. Certified robustness to word substitution attack with differential privacy. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp.  1102–1112, 2021.
  70. Yu-Xiang Wang. Per-instance differential privacy. J. of Privacy and Confidentiality, 9(1), 2019.
  71. A statistical framework for differential privacy. J. of the Amer. Statistical Assoc., 105(489):375–389, Mar. 2010.
  72. Lirong Xia. The smoothed possibility of social choice. In Proc. Int. Conf. on Neural Inf. Process. Syst., pp.  11044–11055, 2020.
  73. Lirong Xia. How likely are large elections tied? In Proc. ACM Conf. on Econ. & Comput., pp.  884–885, 2021.
  74. Incremental network quantization: Towards lossless CNNs with low-precision weights. arXiv:1702.03044, 2017.
  75. Towards unified INT8 training for convolutional neural network. In Proc. IEEE/CVF Conf. on Comput. Vision & Pattern Recognit., pp.  1969–1979, 2020.
Authors (3)
  1. Ao Liu (54 papers)
  2. Yu-Xiang Wang (124 papers)
  3. Lirong Xia (78 papers)
