Differentially Private Fair Binary Classifications (2402.15603v2)

Published 23 Feb 2024 in cs.LG, cs.CR, cs.IT, math.IT, and stat.ML

Abstract: In this work, we investigate binary classification under the constraints of both differential privacy and fairness. We first propose an algorithm based on the decoupling technique for learning a classifier with only a fairness guarantee. This algorithm takes in classifiers trained on different demographic groups and generates a single classifier satisfying statistical parity. We then refine this algorithm to incorporate differential privacy. The performance of the final algorithm is rigorously examined in terms of privacy, fairness, and utility guarantees. Empirical evaluations conducted on the Adult and Credit Card datasets illustrate that our algorithm outperforms the state of the art in terms of fairness guarantees, while maintaining the same level of privacy and utility.
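The abstract's pipeline (train per-group classifiers, post-process them into a single classifier satisfying statistical parity, then privatize that step) can be illustrated with a small sketch. Everything below is a hypothetical illustration, not the authors' algorithm: the function names, the quantile-based threshold rule, and especially the Laplace noise scale are assumptions. A rigorous DP guarantee for quantiles requires a dedicated mechanism (e.g., one based on the exponential mechanism); the noise here only gestures at where privacy enters.

```python
import numpy as np

rng = np.random.default_rng(0)

def parity_thresholds(scores, groups, target_rate):
    """Decoupled post-processing for statistical parity (illustrative).

    Each demographic group gets its own threshold, chosen as the
    (1 - target_rate)-quantile of that group's scores, so every group
    is accepted at (approximately) the same rate.
    """
    return {
        g: float(np.quantile(scores[groups == g], 1.0 - target_rate))
        for g in np.unique(groups)
    }

def private_parity_thresholds(scores, groups, target_rate, epsilon):
    """Hypothetical DP variant: perturb each group's threshold with
    Laplace noise. The noise scale assumes (for illustration only) that
    one record shifts the empirical quantile by at most 1/n of the
    [0, 1] score range; this is NOT a valid sensitivity bound in
    general, and the paper's actual mechanism should be consulted.
    """
    noisy = {}
    for g, t in parity_thresholds(scores, groups, target_rate).items():
        n = int(np.sum(groups == g))
        t += rng.laplace(scale=(1.0 / max(n, 1)) / epsilon)
        noisy[g] = float(np.clip(t, 0.0, 1.0))
    return noisy

def predict(scores, groups, thresholds):
    """Single combined classifier: apply each record's group threshold."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

# Toy usage: two groups with different score distributions.
n = 1000
groups = rng.integers(0, 2, size=n)
scores = np.clip(rng.normal(0.4 + 0.2 * groups, 0.15), 0.0, 1.0)
thr = private_parity_thresholds(scores, groups, target_rate=0.3, epsilon=1.0)
preds = predict(scores, groups, thr)
for g in (0, 1):
    print(g, preds[groups == g].mean())  # acceptance rates should be close
```

On the toy data, the printed per-group acceptance rates should both land near the 0.3 target, which is the statistical-parity property the decoupling step is after; the paper's actual mechanism and its privacy, fairness, and utility analysis are in the full text.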

Authors (2)
  1. Hrad Ghoukasian
  2. Shahab Asoodeh
