Characterizing and modeling harms from interactions with design patterns in AI interfaces (2404.11370v3)

Published 17 Apr 2024 in cs.HC, cs.AI, and cs.CY

Abstract: The proliferation of applications using AI systems has led to a growing number of users interacting with these systems through sophisticated interfaces. Human-computer interaction research has long shown that interfaces shape both user behavior and user perception of technical capabilities and risks. Yet, practitioners and researchers evaluating the social and ethical risks of AI systems tend to overlook the impact of anthropomorphic, deceptive, and immersive interfaces on human-AI interactions. Here, we argue that design features of interfaces with adaptive AI systems can have cascading impacts, driven by feedback loops, which extend beyond those previously considered. We first conduct a scoping review of AI interface designs and their negative impact to extract salient themes of potentially harmful design patterns in AI interfaces. Then, we propose Design-Enhanced Control of AI systems (DECAI), a conceptual model to structure and facilitate impact assessments of AI interface designs. DECAI draws on principles from control systems theory -- a theory for the analysis and design of dynamic physical systems -- to dissect the role of the interface in human-AI systems. Through two case studies on recommendation systems and conversational LLM systems, we show how DECAI can be used to evaluate AI interface designs.
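The abstract's central mechanism — interface design choices feeding back into an adaptive AI system and amplifying their own effects — can be sketched as a toy closed loop. This is an illustrative sketch only, not the paper's DECAI model: the function name, the two-topic setup, and the drift constant are all assumptions chosen for the example. The recommender plays the role of the controller, the user the plant, and the exposure-driven taste drift the second feedback path that makes impacts cascade.

```python
import random

def simulate_feedback_loop(steps=200, learning_rate=0.1, seed=0):
    """Toy closed loop: a recommender (controller) adapts to user clicks,
    while the user's taste (plant) drifts toward what is shown (feedback)."""
    rng = random.Random(seed)
    true_taste = [0.5, 0.5]      # user's real interest in topics A and B
    estimate = [0.5, 0.5]        # system's model of the user
    exposure_share = [0.0, 0.0]  # fraction of impressions per topic
    for _ in range(steps):
        # controller: show the topic the model currently rates highest
        shown = 0 if estimate[0] >= estimate[1] else 1
        exposure_share[shown] += 1
        # plant: the user clicks with probability equal to true taste
        clicked = rng.random() < true_taste[shown]
        # feedback path 1: the system updates its estimate from the click
        estimate[shown] += learning_rate * ((1.0 if clicked else 0.0) - estimate[shown])
        # feedback path 2: repeated exposure nudges the user's actual taste
        true_taste[shown] = min(1.0, true_taste[shown] + 0.002)
    return [s / steps for s in exposure_share], true_taste
```

Because the interface decides what is shown, and what is shown changes both the model's estimate and the user's underlying taste, the loop closes twice — which is why the authors argue interface-level assessments must account for dynamics, not just static design features.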

Authors (3)
  1. Lujain Ibrahim
  2. Luc Rocher
  3. Ana Valdivia
Citations (3)
