Simulated Affection, Engineered Trust: How Anthropomorphic AI Benefits Surveillance Capitalism (2511.06472v1)
Abstract: In this paper, we argue that anthropomorphized technologies, designed to simulate emotional realism, are not neutral tools but cognitive infrastructures that manipulate user trust and behaviour. In doing so, they reinforce the logic of surveillance capitalism, an under-regulated economic system that profits from behavioural manipulation and monitoring. Drawing on Nicholas Carr's theory of the intellectual ethic, we identify how technologies such as chatbots, virtual assistants, and generative models reshape not only what we think about ourselves and our world, but how we think at the cognitive level. We identify how the emerging intellectual ethic of AI benefits a system of surveillance capitalism, and discuss potential ways of addressing it.