The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success (2402.14728v2)

Published 22 Feb 2024 in cs.HC and cs.AI

Abstract: The evolution of AI is set to profoundly reshape the future. The European Union, recognizing this impending prominence, has enacted the AI Act, which regulates market access for AI-based systems. A salient feature of the Act is its aim to safeguard democratic and humanistic values by focusing regulation on transparency, explainability, and the human ability to understand and control AI systems. In doing so, the EU AI Act does not merely specify technological requirements for AI systems; it also issues a democratic call for human-centered AI systems and, in turn, an interdisciplinary research agenda for human-centered innovation in AI development. Without robust methods to assess AI systems and their effects on individuals and society, the EU AI Act risks repeating the mistakes of the EU's General Data Protection Regulation: rushed, chaotic, ad-hoc, and ambiguous implementation that causes more confusion than it provides guidance. Moreover, determined research in human-AI interaction will be pivotal both for regulatory compliance and for advancing AI in a manner that is ethical and effective. Such an approach will ensure that AI development aligns with human values and needs, fostering a technology landscape that is innovative, responsible, and an integral part of our society.

Authors (6)
  1. André Calero Valdez (7 papers)
  2. Moreen Heine (1 paper)
  3. Thomas Franke (6 papers)
  4. Nicole Jochems (1 paper)
  5. Hans-Christian Jetter (1 paper)
  6. Tim Schrills (2 papers)