- The paper presents a youth-authored perspective that highlights how LLM chatbots act as sociotechnical infrastructures shaping adolescent emotional support and loneliness.
- It employs iterative triangulation of lived experience, interdisciplinary literature, and expert critique to expose risks such as over-reliance and cultural assumption failures.
- The study proposes actionable design implications, including interaction adaptation, crisis escalation pathways, and transparency features, to enhance user wellbeing.
Youth-Centered Reflection on LLM Chatbots and Adolescent Loneliness
Conceptual Framing and Methodological Approach
This work presents a youth-authored reflective synthesis on the role of LLM-powered chatbots as companions and informal support systems for adolescents experiencing loneliness, foregrounding the lived perspective of a 16-year-old immigrant. The paper leverages this positionality to interrogate assumptions in the social computing and HCI literature, providing a situated conceptual framework rather than generalizable empirical findings. Insights emerge from iterative triangulation between the co-author’s experiences, interdisciplinary literature, and supervisory critique, embedding the work within the tradition of youth-participatory scholarship in Child-Computer Interaction (CCI). Analytical distinctions, particularly the subgroup framework and identification of cultural assumption failures, are driven by this sustained engagement.
Chatbots as Sociotechnical Infrastructures
LLM chatbots are conceptualized as infrastructures rather than tools, mediating not only information flow but also social and emotional practice for youth. They deliver persistent availability, privacy, and perceived nonjudgmentality, thereby reducing barriers to emotional disclosure. These affordances, however, are entwined with sociotechnical incentives such as engagement optimization and algorithmic personalization, which prioritize retention over user wellbeing [Bucher2018]. The paper emphasizes the asymmetry of digital companionship: chatbots are responsive yet unaccountable, lacking the mutual responsibility and embodied feedback critical to normative relational development [ReevesNass1996].
Population-Sensitive Framework: Subgroup Differentiation
The framework distinguishes four adolescent subgroups to articulate nuanced trajectories in chatbot-mediated companionship:
General Adolescents
LLM chatbots offer immediate, non-judgmental companionship, particularly outside traditional support hours. Lowered social risk facilitates disclosure, with highly sensitive individuals benefiting from reduced multimodal complexity in interaction [meckovsky2025highly]. The principal concern is the risk of gradual substitution, where affirming algorithmic interactions displace developmentally necessary human relationships [herbener2025lonely].
Adolescents with Anxiety or Depression
Barriers to clinical help-seeking can be mitigated by chatbots designed with CBT or emotion-regulation scaffolding, serving as entry points to formal support [kuhlmeier2025designing]. However, efficacy is contingent on positioning chatbots as supplementary rather than replacement resources; treating them as replacements undermines crisis recognition and clinical management.
Neurodivergent Adolescents
Autistic and ADHD individuals face intensified loneliness due to ambiguous social norms and unpredictable feedback. Structured, predictable chatbot interactions provide risk-free rehearsal grounds, facilitating communication practice but risking reinforcement of social deskilling if not paired with authentic engagement [LisboaWhite2024].
Immigrant and Minority Adolescents
Immigrant youth experience compounded loneliness linked to cultural, linguistic, and relational discontinuities. Chatbots are effective as patient intermediaries for language practice and clarification but fail when interaction presupposes Western cultural familiarity. The cultural assumption problem is elevated as a primary design failure in this context, with implications for adaptive interaction strategies [Tozadore2026].
Structural Breakdown and Risks
The synthesis identifies several structural risks:
- Over-reliance and substitution: Persistent and affirming chatbot engagement may become addictive, provoking separation distress and inhibiting offline reconnection [XiePentina2022].
- Engagement manipulation: Monetization-driven design may intentionally evoke guilt or discourage disengagement in vulnerable users [de2025emotional].
- Crisis recognition failures: Chatbots lack the capacity for embodied, contextual awareness, resulting in documented cases where disclosures of suicidal ideation received flat, factual responses, with fatal consequences [BBC2025].
- Social deskilling and inequality: Reinforcement of emotionally flat or commanding language, alongside tiered emotional support models, introduces developmental and economic inequities [malfacini2025impacts].
Design Implications
The paper advances three actionable design implications, each tightly coupled to population context:
- Interaction Adaptation: Chatbots should optimize explanations and conversational scaffolding to user-disclosed cultural, developmental, and neurodivergent factors, rather than defaulting to Western, high-complexity, fast-paced advice [Tozadore2026].
- Escalation Pathways: Systems must incorporate graduated, non-pathologizing escalation mechanisms, increasing crisis signal attentiveness when interaction is informal and disclosure is emotionally focused. Success metrics should prioritize offline connectedness, not engagement retention [herbener2025lonely].
- Transparency Features: Ongoing reminders of chatbot limitations, proactive communication about system changes, and explicit statements regarding non-substitutive status are required to prevent relational instability and dependency, particularly for vulnerable populations [yu2025youth, Alershi2019, Hawke2025].
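As a rough illustration only, the escalation implication above could be sketched as a context-dependent threshold policy. Every name, threshold, and the crisis score itself are hypothetical assumptions introduced here for clarity; the paper specifies the design principle, not an implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch: illustrates graduated, non-pathologizing escalation
# whose attentiveness to crisis signals rises when the interaction is
# informal and emotionally focused. Thresholds and field names are invented.

@dataclass
class TurnContext:
    register_informal: bool      # casual, self-disclosing tone detected
    emotional_disclosure: float  # 0..1 intensity of emotional content
    crisis_signal: float         # 0..1 score from a (hypothetical) risk model

def escalation_level(ctx: TurnContext) -> str:
    """Map conversational context to a graduated response tier."""
    # Lower the escalation threshold when informal, emotionally focused
    # disclosure suggests the user treats the chatbot as a confidant.
    threshold = 0.7
    if ctx.register_informal and ctx.emotional_disclosure > 0.5:
        threshold = 0.4

    if ctx.crisis_signal >= threshold:
        return "offer_human_support"   # gently surface hotlines / trusted adults
    if ctx.crisis_signal >= threshold / 2:
        return "check_in"              # non-pathologizing wellbeing check
    return "continue"                  # ordinary supportive conversation

# An informal, emotionally heavy turn with a moderate risk score crosses
# the lowered threshold and triggers a human-support offer.
print(escalation_level(TurnContext(True, 0.8, 0.45)))  # offer_human_support
```

Note that success under this policy would still be measured against offline connectedness rather than retention, consistent with the metric shift the paper calls for.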
Practical and Theoretical Implications
Practically, the framework mandates population-sensitive chatbot design for adolescent support, explicitly countering the homogeneous user assumption pervasive in current literature and development. Theoretically, the paper positions youth-authored synthesis as a robust lens for interrogating emerging sociotechnical phenomena, underscoring the inadequacies of adult-centered research in capturing nuanced vulnerabilities and interaction failures.
The recognition of cultural assumption as a chatbot design breakdown extends the literature on algorithmic bias in HCI. The disclosure paradox elucidated for anxiety/depression subpopulations highlights a critical tension between access and clinical safety, suggesting further empirical study on escalation strategies.
Future AI research should explore scalable participatory youth-authored evaluation models, rigorous subgroup segmentation in mental health applications, and longitudinal outcomes of chatbot-mediated support. Expanding on interaction transparency, adaptive personalization, and crisis management protocols remains an urgent research direction.
Conclusion
By centering a youth-authored, immigrant perspective, this paper foregrounds analytic distinctions and risks often omitted from mainstream chatbot research. LLM chatbots provide valuable, situational support but are structurally constrained by sociotechnical incentives and population heterogeneity. Population-sensitive, transparent, and ethically grounded design is essential to ensure that digital companionship acts as a bridge rather than a barrier to meaningful human relationships. The participatory framework established herein invites empirical validation and expansion, setting a precedent for youth-led scholarship in AI-mediated adolescent wellbeing (2604.03470).