Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering (2209.04963v4)

Published 12 Sep 2022 in cs.AI, cs.CY, and cs.SE

Abstract: Responsible AI is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of AI. A number of AI ethics principle frameworks have been published recently, but without further guidance on best practices, practitioners are left with little beyond truisms. Moreover, significant effort has been devoted to the algorithm level rather than the system level, focusing mainly on the subset of ethical principles amenable to mathematical treatment, such as fairness. Yet ethical issues can arise at any step of the development lifecycle, cutting across many AI and non-AI components of a system beyond its AI algorithms and models. To operationalize responsible AI from a system perspective, this paper presents a Responsible AI Pattern Catalogue based on the results of a Multivocal Literature Review (MLR). Rather than staying at the principle or algorithm level, it focuses on patterns that AI system stakeholders can apply in practice to ensure that the systems they develop are responsible throughout the entire governance and engineering lifecycle. The catalogue classifies the patterns into three groups: multi-level governance patterns, trustworthy process patterns, and responsible-AI-by-design product patterns. Together, these patterns provide systematic and actionable guidance for stakeholders implementing responsible AI.
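To make the three-group classification concrete, below is a minimal, hypothetical sketch (not taken from the paper) of how such a pattern catalogue could be represented in code. The enum values mirror the three groups named in the abstract; the `Pattern` fields, pattern names, and helper function are illustrative assumptions, not the paper's schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class PatternGroup(Enum):
    """The three pattern groups named in the paper's catalogue."""
    MULTI_LEVEL_GOVERNANCE = "multi-level governance"
    TRUSTWORTHY_PROCESS = "trustworthy process"
    RESPONSIBLE_AI_BY_DESIGN_PRODUCT = "responsible-AI-by-design product"


@dataclass
class Pattern:
    """One catalogue entry; these fields are illustrative, not the paper's schema."""
    name: str
    group: PatternGroup
    stakeholders: List[str] = field(default_factory=list)


def patterns_for_group(catalogue: List[Pattern], group: PatternGroup) -> List[Pattern]:
    """Return the subset of the catalogue belonging to one pattern group."""
    return [p for p in catalogue if p.group is group]


if __name__ == "__main__":
    # Illustrative entries only; the actual catalogue contains many patterns per group.
    catalogue = [
        Pattern("Ethics committee", PatternGroup.MULTI_LEVEL_GOVERNANCE, ["management"]),
        Pattern("Bias risk assessment", PatternGroup.TRUSTWORTHY_PROCESS, ["developers"]),
        Pattern("Ethical black box recorder", PatternGroup.RESPONSIBLE_AI_BY_DESIGN_PRODUCT, ["architects"]),
    ]
    for p in patterns_for_group(catalogue, PatternGroup.TRUSTWORTHY_PROCESS):
        print(p.name)
```

A structure along these lines would let stakeholders query the catalogue by group, lifecycle stage, or role, which is the kind of systematic, actionable lookup the abstract describes.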

Authors (6)
  1. Qinghua Lu (100 papers)
  2. Liming Zhu (101 papers)
  3. Xiwei Xu (87 papers)
  4. Jon Whittle (32 papers)
  5. Didar Zowghi (25 papers)
  6. Aurelie Jacquet (2 papers)
Citations (31)