
FairProof: Confidential and Certifiable Fairness for Neural Networks (2402.12572v2)

Published 19 Feb 2024 in cs.LG, cs.AI, and cs.CR

Abstract: Machine learning models are increasingly used in societal applications, yet legal and privacy concerns demand that they very often be kept confidential. Consequently, there is a growing distrust about the fairness properties of these models in the minds of consumers, who are often at the receiving end of model predictions. To this end, we propose FairProof -- a system that uses Zero-Knowledge Proofs (a cryptographic primitive) to publicly verify the fairness of a model, while maintaining confidentiality. We also propose a fairness certification algorithm for fully-connected neural networks which is befitting to ZKPs and is used in this system. We implement FairProof in Gnark and demonstrate empirically that our system is practically feasible. Code is available at https://github.com/infinite-pursuits/FairProof.


Summary

  • The paper presents a fairness certification algorithm that uses ZKPs to verify individual fairness for each input while preserving model confidentiality.
  • It transforms fairness verification into a robustness certification problem using projection distances to achieve computational efficiency.
  • The authors implement and evaluate FairProof with the Gnark library on multiple datasets, demonstrating its practical efficiency in real-world scenarios.

Overview of "FairProof: Confidential and Certifiable Fairness for Neural Networks"

The paper introduces FairProof, a system that certifies the fairness of neural networks using Zero-Knowledge Proofs (ZKPs). As ML models are increasingly deployed in sensitive societal applications, concern over their fairness has grown; yet these models must often remain confidential for legal and privacy reasons, which undermines public confidence in their fairness. The paper proposes a cryptographic framework that lets the public verify a model's fairness while keeping the model's internal parameters confidential.

Key Contributions

The paper's contributions can be summarized as follows:

  1. Fairness Certification Algorithm: The authors present an algorithm, tailored to fully-connected neural networks with ReLU activations, that computes a fairness certificate. The certificate is based on a local condition known as Individual Fairness (IF), which requires that similar individuals receive similar classifications. Fairness is certified for each input point separately, allowing personalized verification (the first sketch after this list illustrates the setting).
  2. ZKP-Integrable Robustness Certification: A pivotal insight is the reduction of fairness certification to robustness certification, a well-studied problem in the robustness literature. To keep the computation efficient inside a ZKP, the transformation uses projection distances in place of exact distances (see the first sketch below).
  3. Efficient ZKP Protocol: The authors develop a ZKP protocol that proves the validity of the generated fairness certificate. Starting from the polytope containing the queried input, the protocol verifies the distance to each facet until a boundary facet is reached, with all computations kept ZKP-compatible (see the second sketch below).
  4. Implementation and Evaluation: The authors implement FairProof using the Gnark library and provide empirical evidence of practical feasibility on several datasets, including Adult, German Credit, and Default Credit. The results show that the system can distinguish fair from unfair models at a computational cost acceptable for real-world applications.
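The first sketch below is a minimal NumPy illustration, not the authors' implementation, of the geometry behind contributions 1 and 2: inside the activation polytope of a fully-connected ReLU network the model is exactly affine, so a per-input certificate can be computed as a projection distance |w·x + b| / ||w|| to the local decision boundary. The toy two-logit (binary) classifier and all function names are assumptions made for illustration.

```python
import numpy as np

def forward_with_pattern(weights, biases, x):
    """Run a fully-connected ReLU net, recording which ReLUs fire at x."""
    pattern, a = [], x
    for W, b in zip(weights[:-1], biases[:-1]):
        z = W @ a + b
        pattern.append(z > 0)              # activation pattern of this layer
        a = np.maximum(z, 0.0)
    logits = weights[-1] @ a + biases[-1]
    return logits, pattern

def local_linear_map(weights, biases, pattern):
    """Collapse the network to the affine map valid on x's activation polytope."""
    W_eff = np.eye(weights[0].shape[1])
    b_eff = np.zeros(weights[0].shape[1])
    for W, b, p in zip(weights[:-1], biases[:-1], pattern):
        D = np.diag(p.astype(float))       # zero out inactive ReLUs
        b_eff = D @ (W @ b_eff + b)
        W_eff = D @ (W @ W_eff)
    return weights[-1] @ W_eff, weights[-1] @ b_eff + biases[-1]

def projection_certificate(weights, biases, x):
    """Per-input certificate for a two-logit classifier: projection distance
    from x to the decision boundary of the affine map on x's polytope."""
    _, pattern = forward_with_pattern(weights, biases, x)
    W_eff, b_eff = local_linear_map(weights, biases, pattern)
    w = W_eff[1] - W_eff[0]                # boundary: (w1 - w0).x + (b1 - b0) = 0
    b = b_eff[1] - b_eff[0]
    return abs(w @ x + b) / np.linalg.norm(w)
```

Projection distance is a conservative stand-in for the exact distance to the boundary, but it requires only linear algebra, which is what makes it amenable to an arithmetic-circuit encoding.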
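The second sketch is a plain-Python mock, with no cryptography, of the kind of checks a verifier circuit could perform for contribution 3; the halfspace facet representation and the function name are assumptions rather than the paper's exact protocol. In FairProof such checks would be expressed as an arithmetic circuit in a ZKP framework like Gnark.

```python
import numpy as np

def verify_certificate(x, facets, claimed_cert, tol=1e-9):
    """facets: list of (a, c, is_boundary) halfspaces a @ x <= c bounding the
    claimed polytope; is_boundary marks facets from the decision boundary."""
    # 1) x must lie in the claimed polytope: every facet inequality holds.
    for a, c, _ in facets:
        if a @ x > c + tol:
            return False
    # 2) The claimed certificate may not exceed the projection distance
    #    to any decision-boundary facet.
    boundary = [abs(a @ x - c) / np.linalg.norm(a)
                for a, c, is_b in facets if is_b]
    if not boundary:
        return True        # no decision boundary touches this polytope
    return claimed_cert <= min(boundary) + tol
```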

Numerical Results

The practical evaluation shows that FairProof can generate verifiable fairness certificates quickly enough for per-query use on moderately sized neural networks. For instance, on an Intel i9 CPU, the system took approximately 1.17 minutes per data point on the German Credit dataset.

Implications and Future Directions

The implications of this work are manifold. The authors extend the capabilities of neural networks by integrating cryptographic proofs to ensure fairness without compromising model confidentiality, a critical requirement in consumer-facing applications. Practically, this enables models to be deployed in environments where users' trust might otherwise be insufficient due to opacity or lack of fairness guarantees.

Theoretically, the alignment of fairness certification with robustness provides a foundation for future exploration of related cryptographic and learning-theoretic problems. The adaptation of ZKPs for model fairness verification invites further scalability improvements and optimizations that could extend these methods to larger-scale neural architectures and other model classes.

Future developments may extend the system to models with more complex architectures or investigate definitions of fairness beyond local individual fairness. Additionally, incorporating parallelism or hardware acceleration could significantly reduce the computational cost, facilitating wider adoption in industry settings.

In summary, FairProof represents a meaningful step toward integrating fairness guarantees in ML systems while upholding privacy, a crucial need in today's society. The authors' approach could serve as a reference point for developing similar verification systems in other ML applications requiring fairness assurances.
