
BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks (2402.00906v2)

Published 1 Feb 2024 in cs.CR, cs.LG, and cs.NE

Abstract: With the mainstream integration of machine learning into security-sensitive domains such as healthcare and finance, concerns about data privacy have intensified. Conventional artificial neural networks (ANNs) have been found vulnerable to several attacks that can leak sensitive data. In particular, model inversion (MI) attacks enable the reconstruction of data samples that were used to train the model. Neuromorphic architectures have emerged as a paradigm shift in neural computing, enabling asynchronous and energy-efficient computation. However, little to no existing work has investigated the privacy of neuromorphic architectures against model inversion. Our study is motivated by the intuition that the non-differentiable aspect of spiking neural networks (SNNs) might result in inherent privacy-preserving properties, especially against gradient-based attacks. To investigate this hypothesis, we propose a thorough exploration of SNNs' privacy-preserving capabilities. Specifically, we develop novel inversion attack strategies comprehensively designed to target SNNs, offering a comparative analysis with their conventional ANN counterparts. Our experiments, conducted on diverse event-based and static datasets, demonstrate the effectiveness of the proposed attack strategies, thereby questioning the assumption of inherent privacy preservation in neuromorphic architectures.
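To make the threat model concrete, the following is a minimal sketch of a gradient-based model inversion attack of the kind the abstract refers to: given white-box access to a differentiable model, the attacker performs gradient ascent on the *input* to maximize the model's confidence for a chosen class, recovering a class-representative input. The toy linear-softmax "victim model" and its weights are hypothetical stand-ins, not the paper's setup; the point of the paper is that SNNs break the differentiability assumption this sketch relies on, forcing attackers to substitute, e.g., surrogate gradients.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def invert(W, target, steps=200, lr=0.5):
    """Gradient-ascent model inversion against a linear softmax model.

    Starting from a neutral (zero) input, repeatedly step the input in the
    direction that increases log p(target | x). For f(x) = softmax(W x),
    the gradient of log p_target w.r.t. x is W[target] - p @ W.
    """
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        p = softmax(W @ x)
        grad = W[target] - p @ W  # d/dx log p_target for a linear model
        x += lr * grad
    return x, softmax(W @ x)[target]

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))        # hypothetical "trained" weights, 3 classes
x_rec, conf = invert(W, target=1)
print(conf)                        # confidence for the target class after inversion
```

For a spiking network, the forward pass contains non-differentiable threshold functions, so `grad` cannot be computed this way; the paper's attack strategies are built around exactly that obstacle.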
