
The Quasi-Creature and the Uncanny Valley of Agency: A Synthesis of Theory and Evidence on User Interaction with Inconsistent Generative AI (2508.18563v1)

Published 25 Aug 2025 in cs.CY and cs.AI

Abstract: The user experience with large-scale generative AI is paradoxical: superhuman fluency meets absurd failures in common sense and consistency. This paper argues that the resulting potent frustration is an ontological problem, stemming from the "Quasi-Creature": an entity simulating intelligence without embodiment or genuine understanding. Interaction with this entity precipitates the "Uncanny Valley of Agency," a framework where user comfort drops when highly agentic AI proves erratically unreliable. Its failures are perceived as cognitive breaches, causing profound cognitive dissonance. Synthesizing HCI, cognitive science, and philosophy of technology, this paper defines the Quasi-Creature and details the Uncanny Valley of Agency. An illustrative mixed-methods study ("Move 78," N=37) of a collaborative creative task reveals a powerful negative correlation between perceived AI efficiency and user frustration, central to the negative experience. This framework robustly explains user frustration with generative AI and has significant implications for the design, ethics, and societal integration of these powerful, alien technologies.


Summary

  • The paper introduces the Quasi-Creature framework, showing how inconsistent generative AI triggers user frustration and cognitive dissonance.
  • It draws on empirical evidence from the Move 78 experiment, including high NASA-TLX workload scores, to document failures of contextual memory and the prevalence of generic outputs.
  • The study underscores design and ethical implications, advocating for transparency and interfaces that clearly communicate AI limitations to users.

The Quasi-Creature and the Uncanny Valley of Agency: A Synthesis of Theory and Evidence on User Interaction with Inconsistent Generative AI

Introduction: The Ontological Paradox of Generative AI

This paper presents a rigorous theoretical and empirical investigation into the paradoxical user experience of contemporary generative AI systems. While these systems demonstrate superhuman fluency and creative capacity, they simultaneously exhibit erratic failures in common sense, consistency, and factual grounding. The authors argue that the resulting user frustration is not merely a technical artifact but an ontological phenomenon, arising from the emergence of a new class of technological entity: the "Quasi-Creature." This entity simulates agency and intelligence with unprecedented sophistication but lacks embodiment, environmental interaction, and genuine understanding. The paper introduces the "Uncanny Valley of Agency" as a conceptual framework to explain the precipitous drop in user trust and cognitive comfort when interacting with such entities.

Theoretical Foundations: Paradigm Stress in Human-AI Interaction

The analysis begins by situating generative AI within established HCI paradigms: technology as a predictable tool and as a social actor. The instrumental view, grounded in mental models (Norman, 1988), presupposes that users can form accurate predictions about system behavior. The CASA paradigm (Reeves & Nass, 1996) explains anthropomorphic responses to technology exhibiting social cues. Generative AI disrupts both models, creating "paradigm stress" (Harrison et al., 2007). Its non-deterministic, opaque, and adaptive nature resists stable mental modeling and undermines social expectations, leading to cognitive dissonance and frustration.

Drawing on phenomenology and philosophy of mind, the authors argue that the disembodied nature of generative AI is central to its alienness. The lack of sensory grounding and environmental interaction, as articulated by Brooks (1991) and Clark (1997), precludes the acquisition of tacit, embodied know-how. The failures of generative AI—context loss, hallucinations, and inconsistency—are thus not mere usability flaws but manifestations of deep philosophical limitations (Dreyfus, 1972; Searle, 1980).

Empirical Grounding: The "Move 78" Experiment

The "Move 78" experiment provides empirical support for the theoretical framework. Conducted with 37 participants in a creative collaboration task using a customized RAG-based GenAI system, the paper reveals high levels of user frustration, cognitive load, and negative sentiment. Key findings include:

  • Lack of Contextual Memory: Persistent failures in retaining conversational context forced users into repetitive re-prompting.
  • Generic Outputs and Misinterpretation: The AI frequently produced vague or irrelevant responses, failing to follow instructions.
  • Quantitative Evidence: NASA-TLX scores for mental demand and frustration were exceptionally high (mean frustration = 15.08/20), confirming the cognitive burden.
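
The paper identifies the system only as a customized RAG-based pipeline and does not specify its internals. For readers unfamiliar with the pattern, the following is a minimal, generic sketch of retrieval-augmented generation; the bag-of-words embedding, the corpus, and the generate_fn parameter are illustrative stand-ins, not details from the study.

```python
# Minimal, generic sketch of a retrieval-augmented generation (RAG) loop.
# The paper does not describe its system's internals; the embedding,
# corpus, and generator used here are illustrative stand-ins.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: hash tokens into a fixed-size bag-of-words vector."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda p: -(embed(p) @ q))[:k]

def rag_answer(query: str, corpus: list[str], generate_fn) -> str:
    """Prepend retrieved passages to the prompt, then call the generator."""
    context = "\n".join(retrieve(query, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate_fn(prompt)
```

In such a pipeline, conversational memory exists only insofar as prior turns are re-inserted into the prompt, which is one plausible mechanism for the contextual-memory failures listed above.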

A critical result is the strong negative correlation (r = -0.85) between perceived AI efficiency and user frustration, indicating that perceived inefficiency is the primary driver of the negative experience. Additionally, expert users with higher AI familiarity reported greater frustration (r = +0.61), highlighting the "Expert User Expectation Gap."
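
These summary statistics are straightforward to recompute from raw ratings. The sketch below runs a Pearson correlation on hypothetical per-participant data (the study's raw ratings are not reproduced here), with only the sample size and scales taken from the paper; the same recipe applies to the familiarity correlation.

```python
# Sketch of the correlational analysis reported above, run on hypothetical
# ratings; only N = 37 and the rating scales come from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 37  # sample size of the Move 78 study

# Hypothetical ratings: perceived efficiency (Likert-style) and NASA-TLX
# frustration (0-20 subscale). The linear link and noise are assumptions.
efficiency = rng.uniform(1, 5, n)
frustration = np.clip(21 - 2.0 * efficiency + rng.normal(0, 1.0, n), 0, 20)

# Pearson's r: covariance of the two ratings divided by the product
# of their standard deviations.
r = np.corrcoef(efficiency, frustration)[0, 1]
print(f"mean frustration = {frustration.mean():.2f}/20, r = {r:.2f}")
```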

Social Mediation and the Rupture-and-Repair Cycle

Analysis of group dynamics reveals that dyadic groups exhibited the most negative sentiment but the lowest rate of formal escalation, suggesting that social context mediates frustration. The dyad structure enables internal processing of frustration, reducing the need for external complaint. This supports the notion that user interaction with generative AI is characterized by a rupture-and-repair cycle: breakdowns in AI performance trigger attempts at repair, but the AI's non-deterministic behavior resists resolution, leading to the emergence of the Quasi-Creature percept.
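
One way to make the cycle concrete is to read it as a small state machine: a breakdown pushes the interaction out of fluent use, and repair attempts either restore it or, after repeated failures, settle into the Quasi-Creature percept. The states and the failed-repair threshold below paraphrase the description above; they are not a formalism from the paper.

```python
# Toy state-machine reading of the rupture-and-repair cycle. States and
# the threshold of three failed repairs are illustrative assumptions.
from enum import Enum, auto

class State(Enum):
    FLUENT = auto()          # AI behaves as the user's model predicts
    RUPTURE = auto()         # breakdown: context loss, generic output, etc.
    REPAIRING = auto()       # user re-prompts, rephrases, corrects
    QUASI_CREATURE = auto()  # repeated failed repairs: tool/partner limbo

def step(state: State, breakdown: bool, repair_ok: bool, failed: int) -> State:
    if state is State.FLUENT:
        return State.RUPTURE if breakdown else State.FLUENT
    if state is State.RUPTURE:
        return State.REPAIRING  # the user initiates repair
    if state is State.REPAIRING:
        if repair_ok:
            return State.FLUENT
        # Non-deterministic behavior resists resolution.
        return State.QUASI_CREATURE if failed >= 3 else State.REPAIRING
    return State.QUASI_CREATURE  # absorbing in this sketch
```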

The Quasi-Creature and the Uncanny Valley of Agency: Formalization

The Quasi-Creature is defined as an entity that, through failed repair attempts, occupies a liminal space between tool and partner. The Uncanny Valley of Agency framework maps perceived autonomous agency against user trust and cognitive comfort. The valley is characterized by a sharp decline in comfort when an entity appears highly agentic but is erratically unreliable. This is distinct from Mori's original uncanny valley, which is based on physical appearance; here, the source of uncanniness is cognitive inconsistency and inscrutability.
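
The framework is stated qualitatively; as a purely illustrative rendering, comfort can be sketched as a function of perceived agency in which unreliability gates a penalty that grows with agency. The functional form and coefficients below are assumptions, not the paper's model.

```python
# Illustrative (not the paper's) functional form for the Uncanny Valley of
# Agency: comfort rises with perceived agency when behavior is reliable,
# but collapses at high agency when behavior is erratically unreliable.
import numpy as np

def comfort(agency: np.ndarray, reliability: float) -> np.ndarray:
    """agency and reliability both in [0, 1]; output is unitless comfort."""
    usefulness = agency                          # more capable feels more useful
    valley = (1.0 - reliability) * agency ** 3   # penalty bites only when unreliable
    return usefulness - 2.5 * valley

agency = np.linspace(0, 1, 11)
print("reliable:", np.round(comfort(agency, reliability=0.95), 2))
print("erratic: ", np.round(comfort(agency, reliability=0.30), 2))
```

Under these assumptions the reliable curve rises monotonically, while the erratic one peaks at moderate agency and then falls sharply, which is the valley shape the framework describes.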

The failed Theory of Mind (ToM) attempt is central to the expert user frustration. Users spontaneously construct ToM models for the AI, but the AI's behavior violates these models, leading to cognitive dissonance. The Quasi-Creature is thus the object of failed cognitive modeling, and the frustration is a direct consequence of the impossibility of forming a coherent predictive model.
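
That mechanism can be illustrated with a toy simulation: a user who predicts the system's next response from its most frequent past response converges on a deterministic tool but stalls against a non-deterministic one. Every element of the setup (the recurring prompts, the response alphabet, the consistency levels) is an assumption for illustration.

```python
# Toy simulation of the failed-modeling claim: predicting a deterministic
# tool succeeds; predicting an erratic generator plateaus well below it.
import random
from collections import Counter, defaultdict

def simulate(consistency: float, trials: int = 500, seed: int = 1) -> float:
    rng = random.Random(seed)
    responses = ["A", "B", "C", "D"]
    history = defaultdict(Counter)  # prompt -> counts of observed responses
    correct = 0
    for t in range(trials):
        prompt = f"p{t % 10}"  # ten recurring prompts
        # The user's predictive model: expect the most frequent past response.
        seen = history[prompt]
        guess = seen.most_common(1)[0][0] if seen else "A"
        # A consistent system always answers "A"; otherwise it answers randomly.
        answer = "A" if rng.random() < consistency else rng.choice(responses)
        correct += guess == answer
        seen[answer] += 1
    return correct / trials

print(f"deterministic tool: {simulate(consistency=1.0):.2f}")  # near 1.00
print(f"erratic generator:  {simulate(consistency=0.4):.2f}")  # near 0.55
```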

Implications: Design, Ethics, and Societal Integration

The framework has significant implications for AI design, ethics, and societal integration. The pursuit of seamless human simulation in AI design is counterproductive, as it maximizes the cognitive shock of inevitable failures. Instead, interfaces should communicate the AI's alien nature and statistical foundation, making limitations visible and reducing the risk of user dissonance. The integration of Quasi-Creatures into daily life represents a sociotechnical transformation, with technologies actively shaping human behavior and moral decision-making (Latour, 1992; Verbeek, 2005).
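
One concrete reading of "making limitations visible" is to attach provenance and confidence metadata to every response rather than presenting bare fluent text. The wrapper below illustrates that design direction; its fields, thresholds, and wording are hypothetical, not an interface described in the paper.

```python
# Hypothetical illustration of the design recommendation above: surface the
# system's statistical nature instead of simulating a seamless human.
from dataclasses import dataclass, field

@dataclass
class DisclosedResponse:
    text: str
    confidence: float                           # assumed calibrated, in [0, 1]
    sources: list[str] = field(default_factory=list)
    remembered_turns: int = 0                   # context actually carried over

    def render(self) -> str:
        caveats = []
        if self.confidence < 0.6:
            caveats.append("low confidence: verify before relying on this")
        if not self.sources:
            caveats.append("no sources retrieved: generated from the model alone")
        if self.remembered_turns == 0:
            caveats.append("no earlier turns informed this reply")
        note = "\n[limits] " + "; ".join(caveats) if caveats else ""
        return self.text + note

print(DisclosedResponse("Here is a draft stanza...", confidence=0.4).render())
```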

The Uncanny Valley of Agency is also a site of ethical and political contestation. The frustration and opacity induced by Quasi-Creatures can be exploited for economic gain, as described by Zuboff (2019). Addressing these risks requires a research agenda that foregrounds transparency, user agency, and ethical design.

Future Research Directions

The paper outlines four trajectories for future research:

  1. Longitudinal Studies: Tracking the evolution of rupture-and-repair cycles and Quasi-Creature perception over time.
  2. Experimental Manipulation: Systematically varying AI agency and inconsistency to identify thresholds for the uncanny valley effect (a minimal design sketch follows this list).
  3. Cognitive Repair Strategies: Investigating user strategies for coping with failed ToM modeling.
  4. Design Interventions: Developing interfaces that signal AI limitations and support graceful failure recovery.
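
For the experimental-manipulation trajectory, the natural starting point is a fully crossed factorial design over agency and inconsistency levels, with participants randomized across cells. A minimal sketch, in which the levels and the assignment scheme are assumptions:

```python
# Minimal sketch of the factorial design implied by trajectory 2: cross
# displayed agency with injected inconsistency. All levels are assumptions.
from itertools import product
import random

AGENCY = ["passive tool", "suggests actions", "acts autonomously"]
INCONSISTENCY = [0.0, 0.2, 0.5]  # probability of injecting an erratic reply

def assign(participants: list[str], seed: int = 7) -> dict[str, tuple]:
    cells = list(product(AGENCY, INCONSISTENCY))  # 3 x 3 = 9 conditions
    rng = random.Random(seed)
    rng.shuffle(participants)
    # Round-robin assignment keeps cell sizes balanced.
    return {p: cells[i % len(cells)] for i, p in enumerate(participants)}

conditions = assign([f"P{i:02d}" for i in range(1, 38)])  # N = 37, as in Move 78
print(conditions["P01"])
```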

Conclusion

This work provides a robust theoretical and empirical account of the unique frustration experienced in human interaction with inconsistent generative AI. By introducing the Quasi-Creature and the Uncanny Valley of Agency, the authors offer a framework that transcends traditional HCI metaphors and addresses the ontological challenges posed by cognitively alien technologies. The implications for design, ethics, and society are profound, necessitating a shift toward transparency and responsible integration of AI systems. Future research should focus on refining these constructs and developing interventions to mitigate the negative effects of Quasi-Creature interaction, ensuring that human-AI collaboration evolves in a manner that is both effective and ethically sound.
