Understanding Stakeholders' Perceptions and Needs Across the LLM Supply Chain (2405.16311v1)
Abstract: Explainability and transparency of AI systems are undeniably important, leading to several research studies and tools addressing them. Existing works fall short of accounting for the diverse stakeholders of the AI supply chain, who may differ in their needs and consideration of the facets of explainability and transparency. In this paper, we argue for the need to revisit the inquiries of these vital constructs in the context of LLMs. To this end, we report on a qualitative study with 71 different stakeholders, where we explore the prevalent perceptions and needs around these concepts. This study not only confirms the importance of exploring the "who" in XAI and transparency for LLMs, but also reflects on best practices to do so while surfacing the often forgotten stakeholders and their information needs. Our insights suggest that researchers and practitioners should simultaneously clarify the "who" in considerations of explainability and transparency, the "what" in the information needs, and the "why" they are needed to ensure responsible design and development across the LLM supply chain.
- Agathe Balayn
- Lorenzo Corti
- Fanny Rancourt
- Fabio Casati
- Ujwal Gadiraju