
A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers (2312.10076v2)

Published 8 Dec 2023 in cs.CY

Abstract: Organisations generate vast amounts of information, which has resulted in a long-term research effort into knowledge access systems for enterprise settings. Recent developments in artificial intelligence, particularly LLMs, are poised to have a significant impact on knowledge access, with the potential to shape the workplace and knowledge in new and unanticipated ways. Many risks can arise from the deployment of these types of AI systems, due to interactions between the technical system and organisational power dynamics. This paper presents the Consequence-Mechanism-Risk framework to identify risks to workers from AI-mediated enterprise knowledge access systems. We have drawn on wide-ranging literature detailing risks to workers, and categorised risks as being to worker value, power, and wellbeing. The contribution of our framework is to additionally consider (i) the consequences of these systems that are of moral import: commodification, appropriation, concentration of power, and marginalisation, and (ii) the mechanisms, which represent how these consequences may take effect in the system. The mechanisms are a means of contextualising risk within specific system processes, which is critical for mitigation. This framework is aimed at helping practitioners involved in the design and deployment of AI-mediated knowledge access systems to consider the risks introduced to workers, identify the precise system mechanisms that introduce those risks, and begin to approach mitigation. Future work could apply this framework to other technological systems to promote the protection of workers and other groups.
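The abstract pairs three elements: a consequence of moral import, a mechanism (the system process through which it takes effect), and a risk category for workers. A minimal, hypothetical sketch of how a practitioner might record entries under this taxonomy; the category names come from the abstract, but the class, field names, and example entry are illustrative assumptions, not from the paper:

```python
from dataclasses import dataclass

# Category names taken from the abstract; everything else is illustrative.
CONSEQUENCES = {
    "commodification", "appropriation",
    "concentration of power", "marginalisation",
}
RISK_CATEGORIES = {"worker value", "worker power", "worker wellbeing"}

@dataclass
class RiskEntry:
    """One row of a hypothetical Consequence-Mechanism-Risk register."""
    consequence: str    # moral-import consequence driving the risk
    mechanism: str      # system process through which it takes effect
    risk_category: str  # which aspect of workers is affected
    mitigation_notes: str = ""

    def __post_init__(self):
        # Validate against the fixed taxonomy so entries stay comparable.
        if self.consequence not in CONSEQUENCES:
            raise ValueError(f"unknown consequence: {self.consequence}")
        if self.risk_category not in RISK_CATEGORIES:
            raise ValueError(f"unknown risk category: {self.risk_category}")

# Illustrative entry (our wording, not an example given in the paper):
entry = RiskEntry(
    consequence="appropriation",
    mechanism="training on workers' documents without explicit consent",
    risk_category="worker value",
)
```

Anchoring each risk to a named mechanism, as the abstract argues, is what makes mitigation actionable: the register points at a specific system process rather than a diffuse harm.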

Authors (3)
  1. Anna Gausen (1 paper)
  2. Bhaskar Mitra (78 papers)
  3. Siân Lindley (4 papers)
Citations (2)