Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them (2401.11723v2)
Abstract: The advent of the Internet of Things (IoT) has ushered in an era of unprecedented connectivity, with an estimated 80 billion smart devices expected to be in operation by the end of 2025. These devices power a multitude of smart applications, enhancing quality of life and efficiency across many domains. Machine Learning (ML) serves as a crucial technology, not only for analyzing IoT-generated data but also for diverse applications within the IoT ecosystem, such as device recognition, anomaly detection, and uncovering malicious activities. This paper presents a comprehensive exploration of the security threats arising from the integration of ML into various facets of IoT, spanning attack types that include membership inference, adversarial evasion, reconstruction, property inference, model extraction, and poisoning attacks. Unlike previous studies, our work offers a holistic perspective, categorizing threats by criteria such as adversary models, attack targets, and key security attributes (confidentiality, availability, and integrity). We delve into the underlying techniques of ML attacks in IoT environments, providing a critical evaluation of their mechanisms and impacts. Furthermore, we thoroughly assess 65 libraries, both author-contributed and third-party, evaluating their role in safeguarding model and data privacy. We emphasize the availability and usability of these libraries, aiming to equip the community with the tools needed to bolster its defenses against the evolving threat landscape. Through this review and analysis, the paper seeks to contribute to the ongoing discourse on ML-based IoT security, offering practical insights for securing ML models and data in the rapidly expanding field of artificial intelligence in IoT.
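To make two of the attack categories named above concrete, the sketches below are minimal, self-contained PyTorch examples. They are illustrative only: the models, data, and thresholds are placeholders, and the code is not drawn from the paper or from the 65 surveyed libraries. The first sketch shows an adversarial evasion attack in the fast-gradient-sign style, perturbing an input along the sign of the loss gradient in an attempt to change a classifier's prediction.

```python
# Minimal FGSM-style evasion sketch; the classifier and input are toy placeholders
# standing in for, e.g., an ML-based IoT traffic or malware classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

x = torch.randn(1, 20, requires_grad=True)  # benign feature vector
y = torch.tensor([0])                       # its true label

# Gradient of the loss with respect to the input.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# One bounded step in the sign of the gradient (epsilon controls perturbation size).
# With an untrained toy model the prediction may or may not flip; the point is the recipe.
epsilon = 0.25
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The second sketch shows the loss-thresholding baseline commonly used for membership inference: training-set members tend to incur lower loss than non-members, so a calibrated threshold on the per-sample loss yields a simple membership guess. The threshold below is a hypothetical value; practical attacks calibrate it with shadow models or held-out reference data.

```python
# Loss-thresholding membership inference baseline (illustrative sketch).
import torch
import torch.nn as nn

def per_sample_loss(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    """Cross-entropy loss of the target model on one sample; lower suggests membership."""
    model.eval()
    with torch.no_grad():
        return nn.functional.cross_entropy(model(x), y).item()

def guess_member(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 threshold: float = 0.5) -> bool:
    # `threshold` is a placeholder; real attacks tune it using shadow models
    # or reference data drawn from the same distribution.
    return per_sample_loss(model, x, y) < threshold

# Toy target model and query sample (placeholders).
target = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(1, 20), torch.tensor([0])
print("membership guess:", guess_member(target, x, y))
```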
Authors: Chao Liu, Boxi Chen, Wei Shao, Chris Zhang, Kelvin Wong, Yi Zhang