Towards a Responsible AI Metrics Catalogue: A Collection of Metrics for AI Accountability (2311.13158v3)
Abstract: AI, particularly through the advent of large-scale generative AI (GenAI) models such as LLMs, has become a transformative element in contemporary technology. While these models have unlocked new possibilities, they simultaneously present significant challenges, such as concerns over data privacy and the propensity to generate misleading or fabricated content. Current frameworks for Responsible AI (RAI) often fall short in providing the granular guidance necessary for tangible application, especially for Accountability, a principle that is pivotal for ensuring transparent and auditable decision-making, bolstering public trust, and meeting increasing regulatory expectations. This study bridges the accountability gap by introducing our effort towards a comprehensive metrics catalogue, formulated through a systematic multivocal literature review (MLR) that integrates findings from both academic and grey literature. Our catalogue delineates process metrics that underpin procedural integrity, resource metrics that provide necessary tools and frameworks, and product metrics that reflect the outputs of AI systems. This tripartite framework is designed to operationalize Accountability in AI, with a special emphasis on addressing the intricacies of GenAI.
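To make the tripartite structure concrete, the minimal sketch below (not taken from the paper) shows one way a process/resource/product metrics catalogue could be represented as a simple data structure. The three categories follow the abstract; the `Metric` fields and the example entries are hypothetical and purely illustrative.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class MetricCategory(Enum):
    """The three metric categories named in the abstract."""
    PROCESS = "process"    # metrics underpinning procedural integrity (e.g., audits, reviews)
    RESOURCE = "resource"  # metrics on supporting tools and frameworks
    PRODUCT = "product"    # metrics reflecting the outputs of the AI system


@dataclass
class Metric:
    """A single accountability metric entry (field names are illustrative assumptions)."""
    name: str
    category: MetricCategory
    description: str
    evidence: List[str] = field(default_factory=list)  # artefacts that substantiate the metric


# Hypothetical catalogue entries, for illustration only.
catalogue = [
    Metric(
        name="Impact assessment conducted",
        category=MetricCategory.PROCESS,
        description="Whether an AI impact assessment was completed before deployment.",
        evidence=["assessment report"],
    ),
    Metric(
        name="Model card available",
        category=MetricCategory.RESOURCE,
        description="Whether a model card documenting intended use and limitations exists.",
        evidence=["model card"],
    ),
    Metric(
        name="Output provenance logged",
        category=MetricCategory.PRODUCT,
        description="Whether generated outputs are logged with traceable provenance.",
        evidence=["audit logs"],
    ),
]

# Group metric names by category, mirroring how process, resource, and
# product metrics could be reported separately.
by_category = {c: [m.name for m in catalogue if m.category is c] for c in MetricCategory}
print(by_category)
```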
Authors: Boming Xia, Qinghua Lu, Liming Zhu, Sung Une Lee, Yue Liu, Zhenchang Xing