Power and Play: Investigating "License to Critique" in Teams' AI Ethics Discussions (2403.19049v2)

Published 27 Mar 2024 in cs.CY

Abstract: Past work has sought to design AI ethics interventions--such as checklists or toolkits--to help practitioners design more ethical AI systems. However, other work demonstrates how these interventions may instead serve to limit critique to that addressed within the intervention, while rendering broader concerns illegitimate. In this paper, drawing on work examining how standards enact discursive closure and how power relations affect whether and how people raise critique, we recruit three corporate teams, and one activist team, each with prior context working with one another, to play a game designed to trigger broad discussion around AI ethics. We use this as a point of contrast to trigger reflection on their teams' past discussions, examining factors which may affect their "license to critique" in AI ethics discussions. We then report on how particular affordances of this game may influence discussion, and find that the hypothetical context created in the game is unlikely to be a viable mechanism for real world change. We discuss how power dynamics within a group and notions of "scope" affect whether people may be willing to raise critique in AI ethics discussions, and discuss our finding that games are unlikely to enable direct changes to products or practice, but may be more likely to allow members to find critically-aligned allies for future collective action.

Authors (4)
  1. David Gray Widder (12 papers)
  2. Laura Dabbish (13 papers)
  3. James Herbsleb (5 papers)
  4. Nikolas Martelaro (24 papers)
Citations (1)

Summary

Investigating the Dynamics of Team Discussions on AI Ethics Through Gameplay

Introduction to the Study

This paper examines team dynamics in discussions of AI ethics through the lens of gameplay, employing the "What Could Go Wrong?" game as a structured yet open-ended medium for broaching complex ethical considerations in AI development. The research asks how the game environment affects the breadth of ethical discussion compared with teams' typical workplace deliberations, probing the interplay of power relations, the notion of "scope" in ethical considerations, and the potential of game-based interventions to help practitioners identify allies for ethical advocacy within professional settings.

Methodology and Participant Overview

The researchers engaged three corporate teams and one activist group, each with prior experience working with one another. Playing the "What Could Go Wrong?" game, these groups were encouraged to speculate about potential ethical issues in AI applications, stepping outside their routine professional constraints and discussions. Follow-up interviews then drew contrasts between in-game ethical deliberations and those occurring in the teams' standard operational contexts. This dual approach enabled an examination of the dynamics underpinning AI ethics discussions within teams, attending to how in-game dialogue differs from everyday workplace discussion.

Insights on License to Critique and Discussion Dynamics

The game's hypothetical context empowered participants to broach topics and ethical dilemmas that typically go unaddressed in regular work discussions. This expanded "license to critique" surfaced a broader range of ethical concerns, challenging the usual boundaries set by professional scope and highlighting the nuanced role of power dynamics in constraining or enabling ethical discussion. Notably, the presence of managers or others in positions of authority significantly influenced participants' willingness to raise ethical concerns, underlining how hierarchy and perceived repercussions shape the freedom and breadth of ethical deliberation.

The Role of Scope in Ethical Discussions

Discussions revealed a marked emphasis on the concept of "scope": a guiding principle delineating what is deemed relevant or actionable within AI ethics considerations in professional settings. Participants reported a prevailing prioritization of concerns directly tied to project deliverables, or perceived as within the team's immediate capacity to address, while broader, systemic, or speculative ethical issues were sidelined as "out of scope." This suggests a structural and conceptual narrowing of ethical discourse in professional environments, one that can stifle comprehensive examination of AI's ethical implications.

The Impact of Game-based Discussions

Despite the expanded discourse the gameplay facilitated, translating the breadth of these discussions into actionable insights or changes in professional practice remains challenging. Participants were skeptical that speculative, game-driven ethical deliberations could directly inform real-world project decisions, which remain constrained by existing organizational structures, priorities, and risk assessments. However, the game setting created opportunities for team members to discover and align with colleagues' ethical stances and concerns, fostering connections that could lay the groundwork for future collective action within their professional spheres.

Conclusion and Future Directions

This paper underscores the complex interplay of power dynamics, organizational norms, and the constraining notion of "scope" in shaping team discussions on AI ethics. While game-based interventions offer a promising avenue for broadening ethical consideration, translating these expansive discussions into tangible organizational change poses a significant challenge. Future research should explore mechanisms to bridge this gap, so that the rich ethical discourse fostered in speculative environments can meaningfully inform and transform AI development practice toward more ethically aware and inclusive outcomes.

The findings call for deeper engagement with the structures and norms that circumscribe ethical discourse in professional settings, and for strategies that enhance practitioners' license to critique, ultimately fostering a more open, inclusive, and actionable process of ethical deliberation in AI development.