
Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions (2401.13324v6)

Published 24 Jan 2024 in cs.HC and cs.AI

Abstract: Every AI system that makes decisions about people has a group of stakeholders that are personally affected by these decisions. However, explanations of AI systems rarely address the information needs of this stakeholder group, who often are AI novices. This creates a gap between conveyed information and information that matters to those who are impacted by the system's decisions, such as domain experts and decision subjects. To address this, we present the "XAI Novice Question Bank," an extension of the XAI Question Bank containing a catalog of information needs from AI novices in two use cases: employment prediction and health monitoring. The catalog covers the categories of data, system context, system usage, and system specifications. We gathered information needs through task-based interviews where participants asked questions about two AI systems to decide on their adoption and received verbal explanations in response. Our analysis showed that participants' confidence increased after receiving explanations but that their understanding faced challenges. These included difficulties in locating information and in assessing their own understanding, as well as attempts to outsource understanding. Additionally, participants' prior perceptions of the systems' risks and benefits influenced their information needs. Participants who perceived high risks sought explanations about the intentions behind a system's deployment, while those who perceived low risks instead asked about the system's operation. Our work aims to support the inclusion of AI novices in explainability efforts by highlighting their information needs, aims, and challenges. We summarize our findings as five key implications that can inform the design of future explanations for lay stakeholder audiences.
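
The abstract describes the question bank only at the level of its four categories. As a concrete illustration, the sketch below shows one possible way such a catalog could be represented in code; the four category names are taken from the abstract, while the class names, fields, and example question are hypothetical and not drawn from the paper's actual catalog.

```python
# Minimal illustrative sketch of a catalog like the "XAI Novice Question Bank".
# Category names come from the abstract; everything else is an assumption.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class InformationNeed:
    question: str  # a question an AI novice might ask about the system
    use_case: str  # e.g. "employment prediction" or "health monitoring"


@dataclass
class QuestionBank:
    # The four categories of information needs named in the abstract.
    categories: dict[str, list[InformationNeed]] = field(default_factory=lambda: {
        "data": [],
        "system context": [],
        "system usage": [],
        "system specifications": [],
    })

    def add(self, category: str, need: InformationNeed) -> None:
        self.categories[category].append(need)


# Usage: a high-perceived-risk participant's intention-oriented question
# (hypothetical example, mirroring the finding described in the abstract).
bank = QuestionBank()
bank.add("system context", InformationNeed(
    question="Why is this system being deployed at all?",
    use_case="employment prediction",
))
```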

Authors (4)
  1. Timothée Schmude (6 papers)
  2. Laura Koesten (21 papers)
  3. Torsten Möller (29 papers)
  4. Sebastian Tschiatschek (43 papers)
