- The paper argues that framing AI as a collaborator obscures essential contributions from developers and data annotators.
- It highlights exploitative labor conditions in data annotation, drawing parallels with historical labor injustices.
- The analysis advocates redefining AI as a tool to promote transparency and ethical recognition of human expertise.
Enough With "Human-AI Collaboration"
Introduction
The paper "Enough With 'Human-AI Collaboration'" critically examines the prevailing metaphor of "human-AI collaboration," which casts AI systems as partners or teammates in work. This metaphor, though intended to encourage positive engagement with AI, obscures the true relationships among AI developers, data annotators, and end users. The paper instead proposes viewing AI as a tool or instrument, a framing that better captures the dynamics of the human roles involved and the inequities among them.
The Agentistic Turn in Human-Computer Interaction
Human-computer interaction (HCI) has undergone an "agentistic turn," in which AI systems are increasingly treated as entities with agency or personhood, akin to human teammates. This turn is evident in the sharp rise of the phrase "human-AI collaboration" in academic literature (Figure 1). The paper argues that this perspective attributes unwarranted human-like qualities to AI systems and overlooks the critical human labor behind their functionality.
Figure 1: The use of the phrase "human-AI collaboration" in academic scholarship has dramatically increased in the last five years.
The Hidden Labor Behind AI Systems
AI systems depend heavily on labeled training data, much of it sourced from data annotation industries in the Global South. Workers in these industries, who often labor under exploitative conditions for minimal compensation, are integral to creating the datasets from which AI systems learn, as at companies like Infolks in India (Figure 2). Yet this labor, though critical, remains largely invisible in discussions of human-AI collaboration.
Figure 2: Employees of the Indian data labeling company Infolks, based in Kumaramputhur, Kerala.
The Ethical Implications of AI's Labor Distancing
Annotators and field workers provide essential domain expertise that is commodified and distanced as it feeds into AI systems. The paper highlights historical parallels in labor distancing, drawing on examples from colonial plantations and modern digital platforms. The ethical consequences are significant, as this distancing obscures the contributions of human labor and knowledge, perpetuating existing inequities.
Alternatives to Human-Labeled Data
The paper explores alternatives to traditional data annotation, such as synthetic datasets and self-play in constrained environments like games. However, these alternatives have limitations and cannot universally replace the nuanced expertise offered by human annotators.
Reframing AI as a Tool
The paper advocates understanding AI as a sophisticated tool rather than a collaborator. Unlike collaborators, AI systems lack the genuine agency and understanding of their human counterparts. Framing AI as a tool aligns better with current technological capabilities and ethical considerations, and promotes clearer attribution of human labor and expertise.
The metaphor of human-AI collaboration is criticized for its lack of precision and its potential to exacerbate labor exploitation. The paper calls for a shift in terminology toward equitable recognition of all human contributors to AI systems, prioritizing transparency and fairness.
Conclusion
The paper concludes that reframing AI interactions from "collaboration" to "tool use" yields a more accurate and ethical representation of human involvement in AI systems. By acknowledging the full range of human contributions, it aims to curb labor exploitation and foster a more inclusive narrative around AI technologies.