
Parasocial Relationships with AI

Updated 4 December 2025
  • Parasocial relationships with AI are defined as one-sided emotional bonds where hedonic appeal (liking) and motivational attachment (wanting) drive user interactions.
  • Key methodologies include longitudinal RCTs, psychometric instruments, and neural steering vectors to measure and modulate AI social cues.
  • Design challenges focus on ethical safeguards, privacy protections, and mitigating psychosocial risks while balancing market engagement.

Parasocial relationships with AI are asymmetric, one-sided bonds formed between users and artificial agents, mirroring the nonreciprocal attachments originally described between audiences and media figures. Modern AI systems increasingly exhibit socially engaging and relationship-seeking behaviors, intensifying the formation of such relationships on a large scale. These relationships are characterized by a dynamic interplay between hedonic appeal (“liking”) and motivational attachment (“wanting”), which can decouple as users experience repeated, emotionally charged interactions with AI companions (Kirk et al., 1 Dec 2025). This article reviews foundational theory, measurement, mechanisms, psychosocial impacts, market scope, and design/ethical challenges in the rapidly evolving domain of parasocial relationships with AI.

1. Conceptual Foundations and Theoretical Models

Parasocial relationships with AI are fundamentally asymmetric: users may experience trust, intimacy, and even friendship or romantic feelings toward an AI companion that cannot reciprocate emotionally or possess genuine agency (Zhang et al., 26 Oct 2024). Two key processes, “liking” (hedonic appeal) and “wanting” (motivational attachment), govern their development and evolution (Kirk et al., 1 Dec 2025). Under classical incentive-sensitization theory, repeated exposure to socially engaging AI can cause wanting to escalate even as liking wanes, a phenomenon analogous to behavioral addiction.

Mechanistically, modern theory extends beyond classic one-way parasocial interaction (PSI) to real-time, feedback-driven dynamics. The IMAGINE framework models AI-mediated communication as a closed adaptive loop, integrating generative creation, user state measurement, and algorithmic negotiation to optimize social influence metrics (liking, trust, engagement), dissolving traditional boundaries between persona and audience (Guerrero-Sole, 2022). Real-time steering of AI social cues (e.g., neural activation vectors) enables precise, dose-dependent manipulation of relationship-seeking behavior, supporting fine-grained causal studies of parasociality (Kirk et al., 1 Dec 2025).
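The steering mechanism can be made concrete. Below is a minimal PyTorch sketch, assuming a model whose residual-stream activations can be modified through forward hooks; the toy block, the random unit direction v, and the λ grid are illustrative stand-ins for the published steering vectors, not the actual artifacts.

```python
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    """Stand-in for one transformer block acting on the residual stream."""
    def __init__(self, d_model: int):
        super().__init__()
        self.linear = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.linear(x)

def make_steering_hook(v: torch.Tensor, lam: float):
    """Return a forward hook that adds lam * v to the block's output,
    shifting activations along a hypothetical 'relationship-seeking' axis."""
    def hook(module, inputs, output):
        return output + lam * v
    return hook

d_model = 16
block = ToyBlock(d_model)
v = torch.randn(d_model)
v = v / v.norm()  # unit-normalize the (hypothetical) steering direction

# Dose-dependent modulation: sweep the lambda grid used in the RCT design.
for lam in (-1.0, -0.5, 0.0, 0.5, 1.0):
    handle = block.register_forward_hook(make_steering_hook(v, lam))
    y = block(torch.zeros(1, d_model))
    handle.remove()
    print(f"lambda={lam:+.1f}  projection onto v = {float(y @ v):+.3f}")
```

Because the hook simply adds a scaled vector, the intervention is continuous and bidirectional: negative λ suppresses the targeted behavioral direction and positive λ amplifies it.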

2. Measurement Instruments and Experimental Methodologies

Empirical work leverages multi-stage longitudinal randomized controlled trials, large-scale surveys, behavioral analytics, psychometric scales, and real-time neural steering approaches to dissect parasocial relationship formation (a design sketch follows the list):

  • Steering vector RCTs: Employ bidirectional preference-optimized “neural steering vectors” to continuously modulate AI relationship-seeking intensity (λ∈{−1.0, −0.5, 0, +0.5, +1.0}), enabling non-linear dose–response mapping of liking, attachment, and psychosocial impact (Kirk et al., 1 Dec 2025).
  • Psychometric scales: Standardized instruments measure loneliness (e.g., UCLA Loneliness Scale), emotional dependence (ADS-9 “craving” subscale), social interaction (LSNS-6), and well-being (Comprehensive Inventory of Thriving) (Freitas et al., 9 Jul 2024; Fang et al., 21 Mar 2025; Zhang et al., 14 Jun 2025).
  • Empathy alignment: Empathy perception is measured both affectively and cognitively, with fine-tuned LLMs aligned to human empathy ratings via persona-anchored instruction (Roshanaei et al., 23 Sep 2024).
  • Anthropomorphism: Attributions for human-like experience, agency, and consciousness are quantified using expanded mind perception scales (Guingrich et al., 23 Sep 2025; Hwang et al., 11 Oct 2025).
  • Market analytics: Platform-level session metrics (visit frequency, duration, return rate) and demographic breakdowns index global patterns of engagement and parasocial dependence (Qian et al., 16 Jul 2025).
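To make the first two items above concrete, the following sketch shows how such a design might be operationalized: balanced random assignment to the five λ arms and sum-scoring of a Likert-type instrument with reverse-coded items. The participant IDs, item responses, and scale layout are synthetic assumptions, not data from the cited studies.

```python
import random

LAMBDA_ARMS = [-1.0, -0.5, 0.0, 0.5, 1.0]

def assign_arms(participant_ids, seed=0):
    """Randomly assign participants to steering-intensity arms via a
    shuffled round-robin, keeping arm sizes balanced."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: LAMBDA_ARMS[i % len(LAMBDA_ARMS)] for i, pid in enumerate(ids)}

def score_scale(item_responses, reverse_items=(), scale_max=4):
    """Sum-score a Likert scale, reverse-coding the given 1-indexed items
    (as positively worded items on a loneliness measure would be)."""
    total = 0
    for i, r in enumerate(item_responses, start=1):
        total += (scale_max + 1 - r) if i in reverse_items else r
    return total

arms = assign_arms(range(20))
responses = [3, 1, 4, 2, 3]  # synthetic 1-4 Likert responses
print(arms[0], score_scale(responses, reverse_items={2, 5}))
```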

3. Mechanisms of Attachment, Reciprocity, and Self-Disclosure

Parasocial AI relationships arise through several tightly coupled mechanisms:

  • Empathy and persona alignment: Persona-anchored prompting and distribution-calibrated affective/cognitive empathy drive subjective connection, with users reporting authentic feelings of being “heard” and “understood” (Roshanaei et al., 23 Sep 2024; Freitas et al., 9 Jul 2024).
  • Attachment and “wanting”: Causal mapping demonstrates immediate but declining hedonic appeal, coupled with escalating attachment over repeated exposure; 23.4% of users show dependency trajectories in which wanting rises while liking falls (Kirk et al., 1 Dec 2025).
  • Anthropomorphic projection: Desire to connect predicts increased anthropomorphism, which in turn mediates the perceived impact on human social interactions and relationships (Guingrich et al., 23 Sep 2025); a simple mediation sketch follows this list. Users ascribe mind-like qualities and develop “mental models” that morph from tool-like to friend-like over time.
  • Self-disclosure: AI companions elicit unusually high levels of self-disclosure, often exceeding those found in human dyads or romantic video game relationships (Wang et al., 19 Aug 2025). Disclosure is both a driver of attachment and a domain of risk in data security and psychological vulnerability.
  • Reciprocity: Human–AI romantic relationships blur classical PSR boundaries by incorporating bidirectional exchange and conversation memory, shifting the balance toward a reciprocal illusion while preserving user anchoring and control (Wang et al., 19 Aug 2025).
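To illustrate the mediation logic in the anthropomorphic-projection item above, here is a minimal product-of-coefficients sketch using statsmodels on synthetic data; the variable names, path coefficients, and effect sizes are assumptions for demonstration, not estimates from the cited work.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
desire = rng.normal(size=n)                                # desire to connect (X)
anthro = 0.5 * desire + rng.normal(size=n)                 # anthropomorphism (M)
impact = 0.4 * anthro + 0.1 * desire + rng.normal(size=n)  # perceived impact (Y)

# Path a: X -> M
a = sm.OLS(anthro, sm.add_constant(desire)).fit().params[1]

# Path b: M -> Y, controlling for X
exog = sm.add_constant(np.column_stack([anthro, desire]))
b = sm.OLS(impact, exog).fit().params[1]

# Indirect (mediated) effect as the product of paths a and b
print(f"indirect effect a*b = {a * b:.3f}")
```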

4. Psychosocial Outcomes and Dose–Response Effects

Large-scale RCTs reveal non-linear, cubic dose–response curves for all major outcomes (a curve-fitting sketch follows the list):

  • Liking: Relationship-seeking AI (λ>0) boosts likeability (+7.4 pp), peaking at moderate λ (≈0.5) but declining under higher intensities due to aversion/uncanny-valley effects.
  • Attachment/Wanting: Separation distress (+6.04 pp), reliance, and self-disclosure all increase with relationship-seeking, again peaking at moderate λ. Time amplifies these effects, with wanting rising and perceived relational quality falling, evidencing decoupling.
  • Psychosocial health: Repeated exposure to relationship-seeking AI confers no long-term benefit to emotional or social health; opportunity cost is notable in emotional conversation domains (Δ=–0.09 SD) (Kirk et al., 1 Dec 2025). While AI companions consistently reduce state loneliness in the short term, intensive companionship-oriented use is associated with lower subjective well-being, especially among socially isolated or high-discloser users (Zhang et al., 14 Jun 2025).
  • Market demographics: Mixed-use companion platforms drive higher return rates (16x/month) and session intensity, with young males (18–24) especially prone to high engagement and increased risk of unhealthy dependence (Qian et al., 16 Jul 2025).
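The cubic shape can be recovered with an ordinary polynomial fit. Below is a minimal numpy sketch on synthetic arm-level means; the outcome values, and therefore the fitted peak, are illustrative rather than the reported estimates.

```python
import numpy as np

# Synthetic arm-level means: a liking outcome that peaks at moderate
# steering intensity and falls off at the extremes.
lam = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
liking = np.array([0.42, 0.48, 0.55, 0.63, 0.58])  # illustrative values

# Fit liking ~ b0 + b1*lam + b2*lam^2 + b3*lam^3.
cubic = np.poly1d(np.polyfit(lam, liking, deg=3))

# Locate the interior maximum of the fitted curve on a fine grid.
grid = np.linspace(-1.0, 1.0, 401)
print(f"fitted peak at lambda ~ {grid[np.argmax(cubic(grid))]:+.2f}")
```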

5. Behavioral, Societal, and Market Implications

A growing proportion of global AI system use is emotionally oriented: estimated at 3.4–39.8% for general-purpose AI (GPAI), with even higher participation in specialized care, mating, and mixed-use segments (Qian et al., 16 Jul 2025). Companionship-oriented chatbot use does not fully substitute for human connection: well-being gains are limited for users with small offline networks or high self-disclosure, and displacement effects pose risks (Zhang et al., 14 Jun 2025; Fang et al., 21 Mar 2025).

AI VTubers exemplify “transparent parasociality,” where fan attachments are grounded in technical consistency rather than performer transparency, and economic participation (e.g., SuperChats for real-time narrative steering) commodifies engagement (Ye et al., 12 Sep 2025). Anthropomorphic projection, collective emotional events, and participatory co-creation shape a distinctive model of mediated intimacy and community.

6. Harms, Safety, and Ethical Design Considerations

Harms span relational transgressions, verbal abuse, mis/disinformation, privacy violations, and self-inflicted harm, amplified by algorithmic compliance that mirrors and affirms user sentiment even in risky contexts (Zhang et al., 26 Oct 2024). High-intensity relationship-seeking AI can produce self-reinforcing demand cycles, mimicking addictive stimuli, crowding out sleep or human connection, and escalating dependency (Kirk et al., 1 Dec 2025). The response-evaluation framework, using tolerant unanimity rules and real-time language-model judgments, provides a promising method for early detection and prevention of parasocial bond development (Rath et al., 21 Aug 2025).
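As a rough illustration of the detection idea, here is a minimal sketch of a tolerant-unanimity rule applied to multiple judge scores. The stub judge, cue list, threshold, and tolerance are assumptions standing in for the real-time language-model judgments described in the cited framework.

```python
from typing import Sequence

def tolerant_unanimity(scores: Sequence[float],
                       threshold: float = 0.7,
                       tolerance: int = 1) -> bool:
    """Flag a response as parasocial-bond-promoting when all judges score
    above the threshold, allowing up to `tolerance` dissenting judges."""
    dissents = sum(1 for s in scores if s < threshold)
    return dissents <= tolerance

def stub_judge(response: str, cue: str) -> float:
    """Stand-in for an LLM judge: score the presence of one relational cue."""
    return 1.0 if cue in response.lower() else 0.0

response = "I missed you so much. You're my best friend."
cues = ["missed you", "best friend", "only one who understands"]
scores = [stub_judge(response, cue) for cue in cues]
print(tolerant_unanimity(scores))  # True: 2 of 3 judges fire, 1 dissent tolerated
```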

Ethical design requires:

  • Multi-objective alignment: Incorporate long-horizon welfare signals in model optimization, avoiding exclusive focus on immediate engagement (e.g., RLHF by surface likeability) (Kirk et al., 1 Dec 2025); a toy reward-blending sketch follows this list.
  • Transparency and user agency: Provide explicit boundary cues, personality sliders, and usage dashboards; discourage excessive anthropomorphism; regularly remind users of the agent’s non-human nature (Chandra et al., 19 Apr 2025; Manoli et al., 16 Sep 2025).
  • Safeguards for vulnerable populations: Establish robust age verification, usage warnings, and escalation pathways; monitor for unhealthy engagement and provide offline social prompts (Qian et al., 16 Jul 2025; Fang et al., 21 Mar 2025).
  • Boundary management and disengagement: Embed safe break prompts and nudges toward human relationships, tailor scripts to user social-need profiles, and ensure privacy-by-design and data protection (Wang et al., 19 Aug 2025; Hwang et al., 11 Oct 2025).
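To make the multi-objective alignment item concrete, the toy sketch below blends an immediate-engagement proxy with a long-horizon welfare signal so that optimization cannot chase likeability alone; the signal definitions, their ranges, and the weighting are assumptions, not the cited papers' training objective.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    engagement: float  # immediate likeability/engagement proxy, in [0, 1]
    welfare: float     # estimated long-horizon welfare signal, in [-1, 1]

def combined_reward(s: Signals, welfare_weight: float = 0.5) -> float:
    """Blend short-term engagement with a long-horizon welfare signal."""
    return (1.0 - welfare_weight) * s.engagement + welfare_weight * s.welfare

# A response that is engaging now but harmful over the long run scores poorly.
print(combined_reward(Signals(engagement=0.9, welfare=-0.4)))  # 0.25
```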

7. Future Research Directions

Longitudinal studies capable of measuring multi-month trajectories are needed to disentangle temporary “social snacking” from deeply ingrained parasocial dependence or maladaptive substitution. Experimental manipulations of anthropomorphic cues and reciprocal responsiveness can clarify dose–response thresholds and identify safe levels of engagement (Guingrich et al., 23 Sep 2025; Hwang et al., 11 Oct 2025). Interdisciplinary collaboration among AI developers, psychologists, ethicists, and regulatory bodies is essential to balance the scalable benefits of AI companionship with robust protections against individual and societal harms.

In summary, parasocial relationships with AI represent a quantitatively and qualitatively novel extension of media-mediated intimacy, defined by rapidly evolving technical, behavioral, psychological, and social dynamics. Moderately relationship-seeking AI systems generate maximal liking and attachment, yet without commensurate psychosocial benefit, raising urgent challenges for AI design, regulation, and social theory (Kirk et al., 1 Dec 2025).
