A Red Teaming Framework for Securing AI in Maritime Autonomous Systems (2312.11500v1)

Published 8 Dec 2023 in cs.CR and cs.AI

Abstract: AI is being ubiquitously adopted to automate processes in science and industry. However, due to its often intricate and opaque nature, AI has been shown to possess inherent vulnerabilities that can be maliciously exploited by adversarial AI, potentially putting AI users and developers at both cyber and physical risk. In addition, the real-world effects of adversarial AI remain poorly understood, and AI security examinations are inadequate; the growing threat landscape is therefore unknown for many AI solutions. To mitigate this issue, we propose one of the first red team frameworks for evaluating the AI security of maritime autonomous systems (MAS). The framework provides operators with a proactive (secure by design) and reactive (post-deployment evaluation) response to securing AI technology today and in the future. The framework is a multi-part checklist that can be tailored to different systems and requirements. We demonstrate that a red team can use this framework to uncover numerous vulnerabilities, ranging from data poisoning to adversarial patch attacks, within a real-world MAS AI. The lessons learned from systematic AI red teaming can help prevent MAS-related catastrophic events in a world of increasing uptake of, and reliance on, mission-critical AI.
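
To make the attack classes named in the abstract concrete, below is a minimal sketch of a BadNets-style data poisoning (backdoor) attack, one of the vulnerability types a red team exercise like this might probe: a small trigger patch is stamped onto a fraction of the training images and their labels are flipped to an attacker-chosen class. This is an illustrative assumption written against a generic PyTorch image dataset; the function, tensor shapes, and parameter names are ours, not the paper's.

```python
import torch

def poison_dataset(images, labels, target_class, rate=0.1, trigger_size=4):
    """BadNets-style backdoor poisoning, as a hedged sketch.

    Assumes `images` is a float tensor of shape (N, C, H, W) scaled
    to [0, 1] and `labels` an integer tensor of shape (N,); these
    are illustrative assumptions, not the authors' tooling.
    """
    images, labels = images.clone(), labels.clone()
    num_poisoned = int(rate * images.size(0))
    idx = torch.randperm(images.size(0))[:num_poisoned]
    # Stamp a white square trigger in the bottom-right corner of
    # each selected image and flip its label to the target class.
    images[idx, :, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_class
    return images, labels
```

A model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger square appears, which is why such attacks are hard to catch with standard accuracy testing alone.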

Authors (3)
  1. Mathew J. Walter (2 papers)
  2. Aaron Barrett (10 papers)
  3. Kimberly Tam (2 papers)
Citations (1)