A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers (2312.10076v2)
Abstract: Organisations generate vast amounts of information, which has motivated a long-term research effort into knowledge access systems for enterprise settings. Recent developments in artificial intelligence, particularly large language models (LLMs), are poised to have a significant impact on knowledge access, and may shape the workplace and its knowledge in new and unanticipated ways. The deployment of these types of AI systems can introduce many risks, arising from interactions between the technical system and organisational power dynamics. This paper presents the Consequence-Mechanism-Risk framework for identifying risks to workers from AI-mediated enterprise knowledge access systems. Drawing on a wide-ranging literature detailing risks to workers, we categorise risks as affecting worker value, power, and wellbeing. The contribution of our framework is to additionally consider (i) the consequences of these systems that are of moral import: commodification, appropriation, concentration of power, and marginalisation, and (ii) the mechanisms, which represent how these consequences may take effect in the system. The mechanisms are a means of contextualising risk within specific system processes, which is critical for mitigation. This framework is aimed at helping practitioners involved in the design and deployment of AI-mediated knowledge access systems to consider the risks introduced to workers, identify the precise system mechanisms that introduce those risks, and begin to approach mitigation. Future work could apply this framework to other technological systems to promote the protection of workers and other groups.
Authors: Anna Gausen, Bhaskar Mitra, Siân Lindley