
Computers as Bad Social Actors: Dark Patterns and Anti-Patterns in Interfaces that Act Socially (2302.04720v3)

Published 9 Feb 2023 in cs.HC

Abstract: Technologies increasingly mimic human-like social behaviours. Beyond prototypical conversational agents like chatbots, this also applies to basic automated systems like app notifications or self-checkout machines that address or 'talk to' users in everyday situations. Whilst early evidence suggests social cues may enhance user experience, we lack a good understanding of when, and why, their use may be inappropriate. Building on a survey of English-speaking smartphone users (n=80), we conducted experience sampling, interview, and workshop studies (n=11) to elicit people's attitudes and preferences regarding how automated systems talk to them. We thematically analysed examples of phrasings/conduct participants disliked, the reasons they gave, and what they would prefer instead. One category of inappropriate behaviour we identified regards the use of social cues as tools for manipulation. We describe four unwanted tactics interfaces use: agents playing on users' emotions (e.g., guilt-tripping or coaxing them), being pushy, 'mothering' users, or being passive-aggressive. Another category regards pragmatics: personal or situational factors that can make a seemingly friendly or helpful utterance come across as rude, tactless, or invasive. These include failing to account for relevant contextual particulars (e.g., embarrassing users in public); expressing obviously false personalised care; or treating a user in ways that they find inappropriate for the system's role or the nature of their relationship. We discuss these behaviours in terms of an emerging 'social' class of dark and anti-patterns. Drawing from participant recommendations, we offer suggestions for improving how interfaces treat people in interactions, including broader normative reflections on treating users respectfully.

Authors (3)
  1. Lize Alberts
  2. Ulrik Lyngs
  3. Max Van Kleek
Citations (12)
