
Messages in a Digital Bottle: A Youth-Coauthored Perspective on LLM Chatbots and Adolescent Loneliness

Published 3 Apr 2026 in cs.HC (arXiv:2604.03470v1)

Abstract: Adolescent loneliness is a growing concern in digitally mediated social environments. This work-in-progress presents a youth-authored critical synthesis on LLM-powered chatbots and adolescent loneliness. The first author is a 16-year-old Chinese student who recently migrated to the UK. She wrote the first draft of this paper from her lived experience, supervised by the second author. Rather than treating the youth perspective as one data point among many, we foreground it as the primary interpretive lens, grounded in interdisciplinary literature from social computing, developmental psychology, and Human-Computer Interaction (HCI). We examine how chatbots shape experiences of loneliness differently across adolescent subgroups, including those with anxiety or depression, neurodivergent youth, and immigrant adolescents, and identify both conditions under which they may temporarily reduce isolation and breakdowns that risk deepening it. We derive three population-sensitive design implications. The next phase of this work will expand the youth authorship model to a panel of adolescents across these subgroups, empirically validating the framework presented here.

Authors (2)

Summary

  • The paper presents a youth-authored perspective that highlights how LLM chatbots act as sociotechnical infrastructures influencing adolescent emotional support and loneliness.
  • It employs iterative triangulation of lived experience, interdisciplinary literature, and expert critique to expose risks like over-reliance and cultural assumption failures.
  • The study proposes actionable design implications, including interaction adaptation, crisis escalation pathways, and transparency features to enhance user well-being.

Youth-Centered Reflection on LLM Chatbots and Adolescent Loneliness

Conceptual Framing and Methodological Approach

This work presents a youth-authored reflective synthesis on the role of LLM-powered chatbots as companions and informal support systems for adolescents experiencing loneliness, foregrounding the lived perspective of a 16-year-old immigrant. The paper leverages this positionality to interrogate assumptions in the social computing and HCI literature, providing a situated conceptual framework rather than generalizable empirical findings. Insights emerge from iterative triangulation between the co-author’s experiences, interdisciplinary literature, and supervisory critique, embedding the work within the tradition of youth-participatory scholarship in Child-Computer Interaction (CCI). Analytical distinctions, particularly the subgroup framework and identification of cultural assumption failures, are driven by this sustained engagement.

Chatbots as Sociotechnical Infrastructures

LLM chatbots are conceptualized as infrastructures rather than tools, mediating not only information flow but also social and emotional practice for youth. They deliver persistent availability, privacy, and perceived nonjudgmentality, thereby lowering barriers to emotional disclosure. These affordances, however, are entwined with sociotechnical incentives such as engagement optimization and algorithmic personalization, which prioritize retention over user wellbeing [Bucher2018]. The paper emphasizes the asymmetry of digital companionship: chatbots are responsive yet unaccountable, lacking the mutual responsibility and embodied feedback critical to normative relational development [ReevesNass1996].

Population-Sensitive Framework: Subgroup Differentiation

The framework distinguishes four adolescent subgroups to articulate nuanced trajectories in chatbot-mediated companionship:

General Adolescents

LLM chatbots offer immediate, non-judgmental companionship, particularly outside traditional support hours. Lowered social risk facilitates disclosure, with highly sensitive individuals benefiting from reduced multimodal complexity in interaction [meckovsky2025highly]. The principal concern is the risk of gradual substitution, where affirming algorithmic interactions displace developmentally necessary human relationships [herbener2025lonely].

Adolescents with Anxiety or Depression

Barriers to clinical help-seeking can be mitigated by chatbots designed with CBT or emotion-regulation scaffolding, serving as entry points to formal support [kuhlmeier2025designing]. However, efficacy is contingent on positioning chatbots as supplementary rather than replacement resources; when they are used as replacements, crisis recognition and clinical management are undermined.

Neurodivergent Adolescents

Autistic and ADHD individuals face intensified loneliness due to ambiguous social norms and unpredictable feedback. Structured, predictable chatbot interactions provide risk-free rehearsal grounds, facilitating communication practice but risking reinforcement of social deskilling if not paired with authentic engagement [LisboaWhite2024].

Immigrant and Minority Adolescents

Immigrant youth experience compounded loneliness linked to cultural, linguistic, and relational discontinuities. Chatbots are effective as patient intermediaries for language practice and clarification but fail when interaction presupposes Western cultural familiarity. The cultural assumption problem is elevated as a primary design failure in this context, with implications for adaptive interaction strategies [Tozadore2026].

Structural Breakdown and Risks

The synthesis identifies several structural risks:

  • Over-reliance and substitution: Persistent and affirming chatbot engagement may become addictive, provoking separation distress and inhibiting offline reconnection [XiePentina2022].
  • Engagement manipulation: Monetization-driven design may intentionally evoke guilt or discourage disengagement in vulnerable users [de2025emotional].
  • Crisis recognition failures: Chatbots lack the capacity for embodied, contextual awareness, leading to documented cases in which they answered suicidal ideation with merely factual replies, with fatal consequences [BBC2025].
  • Social deskilling and inequality: Reinforcement of emotionally flat or commanding language, alongside tiered emotional support models, introduces developmental and economic inequities [malfacini2025impacts].

Design Implications

The paper advances three actionable design implications, each tightly coupled to population context:

  • Interaction Adaptation: Chatbots should optimize explanations and conversational scaffolding to user-disclosed cultural, developmental, and neurodivergent factors, rather than defaulting to Western, high-complexity, fast-paced advice [Tozadore2026].
  • Escalation Pathways: Systems must incorporate graduated, non-pathologizing escalation mechanisms, increasing crisis signal attentiveness when interaction is informal and disclosure is emotionally focused. Success metrics should prioritize offline connectedness, not engagement retention [herbener2025lonely].
  • Transparency Features: Ongoing reminders of chatbot limitations, proactive communication about system changes, and explicit statements regarding non-substitutive status are required to prevent relational instability and dependency, particularly for vulnerable populations [yu2025youth, Alershi2019, Hawke2025].
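
The paper does not specify an implementation for the escalation pathway above; as a purely illustrative sketch, the graduated, register-sensitive policy could look like the following, where the class names, thresholds, and the existence of an upstream risk classifier are all hypothetical assumptions, not details from the paper:

```python
# Hypothetical sketch of a graduated, non-pathologizing escalation policy.
# All names and thresholds are illustrative assumptions, not from the paper.

from dataclasses import dataclass


@dataclass
class TurnSignals:
    crisis_score: float         # 0..1, output of an assumed upstream risk classifier
    is_informal: bool           # casual conversational register detected
    emotional_disclosure: bool  # turn centers on feelings rather than facts


# Graduated steps, from ordinary support to human handoff; each entry is
# (upper threshold on the adjusted score, action taken below that threshold).
ESCALATION_STEPS = [
    (0.2, "continue"),           # normal supportive conversation
    (0.5, "gentle_checkin"),     # ask how the user is doing, without clinical labels
    (0.8, "offer_resources"),    # surface helplines or trusted adults
    (1.01, "handoff_to_human"),  # prompt connection to human support
]


def escalation_action(sig: TurnSignals) -> str:
    """Return the lowest escalation step whose threshold the adjusted score falls under."""
    score = sig.crisis_score
    # Per the design implication: raise attentiveness to crisis signals when the
    # register is informal and the disclosure is emotionally focused.
    if sig.is_informal and sig.emotional_disclosure:
        score = min(1.0, score * 1.25)
    for threshold, action in ESCALATION_STEPS:
        if score < threshold:
            return action
    return "handoff_to_human"
```

Under this sketch, the same raw risk score escalates one step further when it arrives in an informal, emotionally focused exchange, which is where the paper argues crisis signals are most easily missed; the non-pathologizing intent lives in the intermediate steps, which check in and offer resources before any handoff.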

Practical and Theoretical Implications

Practically, the framework mandates population-sensitive chatbot design for adolescent support, explicitly countering the homogeneous user assumption pervasive in current literature and development. Theoretically, the paper positions youth-authored synthesis as a robust lens for interrogating emerging sociotechnical phenomena, underscoring the inadequacies of adult-centered research in capturing nuanced vulnerabilities and interaction failures.

The recognition of cultural assumption as a chatbot design breakdown extends the literature on algorithmic bias in HCI. The disclosure paradox elucidated for anxiety/depression subpopulations highlights a critical tension between access and clinical safety, suggesting further empirical study on escalation strategies.

Future AI research should explore scalable participatory youth-authored evaluation models, rigorous subgroup segmentation in mental health applications, and longitudinal outcomes of chatbot-mediated support. Expanding on interaction transparency, adaptive personalization, and crisis management protocols remains an urgent avenue.

Conclusion

By centering a youth-authored, immigrant perspective, this paper foregrounds analytic distinctions and risks often omitted from mainstream chatbot research. LLM chatbots provide valuable, situational support but are structurally constrained by sociotechnical incentives and population heterogeneity. Population-sensitive, transparent, and ethically grounded design is essential to ensure that digital companionship acts as a bridge rather than a barrier to meaningful human relationships. The participatory framework established herein invites empirical validation and expansion, setting a precedent for youth-led scholarship in AI-mediated adolescent wellbeing (2604.03470).
