- The paper reveals young people’s perspectives on using genAI chatbots, emphasizing the need to humanize AI while preserving authentic mental health care.
- The paper employs reflexive thematic analysis of diverse co-design workshops to highlight demands for transparency, traceability, and ethical oversight.
- The paper recommends integrated strategies where AI augments clinician support through personalized, youth-centric interactions and secure data management.
Young People's Perspectives on Conversational Generative AI Chatbots in Youth Mental Health
Introduction
The paper "Young people's perceptions and recommendations for conversational generative artificial intelligence in youth mental health" (2604.13381) explores the sociotechnical, ethical, and user experience aspects of deploying genAI chatbots within youth mental health contexts. Through reflexive thematic analysis of co-design workshops, the study interrogates young people's expectations, apprehensions, and design imperatives for repurposing Mia, a genAI chatbot initially tailored to professionals, for direct consumer use among youth within mental health services.
Methodology and Participant Demographics
Thirty-two individuals aged 18–30 years participated in the workshops, with substantial intersectional diversity in gender identity, sexual orientation, cultural background, neurodiversity, chronic illness, and lived experience of mental health issues. A multifaceted methodological approach combined conceptual workshops, live user-testing with iterative prototyping, and post-session surveys, enabling the extraction of rich qualitative insights. Data analysis followed inductive reflexive thematic analysis involving cross-disciplinary lived experience researchers, clinicians, and HCI specialists.
Thematic Findings
Humanizing AI Without Dehumanizing Care
Young participants articulated a persistent concern about the risk of eroding human connection in care settings, emphasizing that genAI chatbots should augment, not supplant, clinician-delivered empathy. The perceived inability of chatbots to grasp the contextual, phenomenological intricacies of mental health experiences was foregrounded, alongside apprehensions about reductionist or generic advice. Conversely, participants highlighted sophisticated, youth-centric language modeling, personalized outputs, intersectional sensitivity, and emotionally resonant communication as prerequisites for trust and therapeutic engagement. The tension between technical anthropomorphization and the irreplaceable value of authentic human relationships reflects established critiques in the literature on healthcare automation and AI-assisted care.
Transparency, Traceability, and Accuracy: "What's Under the Hood"
Participants demanded extensive system transparency, including inspection of the evidence base, explainability of decision processes, clarity about data usage, and rationale for outputs. Traceability (linking outputs to research literature and contextual user data) was perceived as empowering and agency-enabling, yet tempered by concerns that full disclosure of negative trajectories may trigger disengagement or psychological harm. Participants also expected genAI to meet higher interpretive and recall standards than clinicians, while remaining prone to algorithm aversion when it erred; this expectation asymmetry underscores the tension between the therapeutic alliance and perceived algorithmic objectivity in shaping mental health outcomes.
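Concretely, this kind of traceability can be pictured as a structured response payload that carries its own evidence trail. The following is a minimal sketch in Python; the `TraceableReply` and `EvidenceLink` names, their fields, and the placeholder citation are illustrative assumptions, not details of the Mia system.

```python
# A minimal sketch of a traceable chatbot reply: every output carries its
# rationale, evidence links, and the user-context fields it drew on.
# All names and fields here are hypothetical, not drawn from Mia.
from dataclasses import dataclass, field


@dataclass
class EvidenceLink:
    """A research source grounding one claim in the reply."""
    citation: str                 # human-readable citation (placeholder here)
    url: str                      # link for "under the hood" inspection
    claim_span: tuple[int, int]   # character range of the claim in `text`


@dataclass
class TraceableReply:
    """Chatbot output bundled with its rationale and evidence trail."""
    text: str
    rationale: str                                          # plain-language reason
    evidence: list[EvidenceLink] = field(default_factory=list)
    context_keys: list[str] = field(default_factory=list)   # user data used

    def explain(self) -> str:
        """Render an inspectable 'why am I seeing this?' summary."""
        lines = [f"Because: {self.rationale}"]
        lines += [f"- {e.citation} ({e.url})" for e in self.evidence]
        if self.context_keys:
            lines.append("Personalized using: " + ", ".join(self.context_keys))
        return "\n".join(lines)


reply = TraceableReply(
    text="Short daily walks can help lift low mood.",
    rationale="Behavioral activation is a common low-intensity strategy.",
    evidence=[EvidenceLink(
        citation="(placeholder peer-reviewed source)",
        url="https://example.org/source",
        claim_span=(0, 43),
    )],
    context_keys=["intake_form.mood", "stated_preferences.activity"],
)
print(reply.explain())
```

Keeping the evidence trail as data rather than free text is what makes opt-in disclosure possible: the interface can withhold or reveal the trail according to the user's granularity preferences.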
Appropriate Roles, Service Integration, and Touchpoints
Young people delineated a multifaceted set of roles for genAI chatbots: navigator (service access support), assessor (iterative user input interpretation and recommendation generation), educator (mental health literacy and treatment explanation), and creator of personalized support and safety plans. Key touchpoints spanned pre-intake self-screening, intake form completion, crisis management, pre-session preparation, and ongoing engagement. Analytical emphasis was placed on service integration strategies that optimize soft entry, intake efficiency, and continuity while avoiding workflow disruption. Participants underscored the value of actionable, individualized recommendations, carefully distinguishing between self-implemented interventions and those requiring clinician involvement. The risk of conflicting recommendations between AI and human clinicians was identified as a critical implementation challenge requiring careful alignment with clinical practice.
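One way to operationalize this role set is to scope each role to the touchpoints at which it is sanctioned, so the chatbot cannot, for instance, generate a safety plan during intake form completion. The role and touchpoint names below come from the findings; the specific mapping and the `may_act` guard are hypothetical assumptions, not a specification from the paper.

```python
# A sketch scoping the four workshop-derived roles to care-journey
# touchpoints. The ALLOWED_ROLES mapping is illustrative, not prescribed.
from enum import Enum, auto


class Role(Enum):
    NAVIGATOR = auto()   # service access support
    ASSESSOR = auto()    # interprets user input, generates recommendations
    EDUCATOR = auto()    # mental health literacy, treatment explanation
    CREATOR = auto()     # personalized support and safety plans


class Touchpoint(Enum):
    PRE_INTAKE_SCREENING = auto()
    INTAKE_FORM = auto()
    CRISIS_MANAGEMENT = auto()
    PRE_SESSION_PREP = auto()
    ONGOING_ENGAGEMENT = auto()


# Hypothetical scoping: which roles the chatbot may adopt at each step.
ALLOWED_ROLES: dict[Touchpoint, set[Role]] = {
    Touchpoint.PRE_INTAKE_SCREENING: {Role.NAVIGATOR, Role.ASSESSOR},
    Touchpoint.INTAKE_FORM: {Role.ASSESSOR},
    Touchpoint.CRISIS_MANAGEMENT: {Role.NAVIGATOR, Role.CREATOR},
    Touchpoint.PRE_SESSION_PREP: {Role.EDUCATOR, Role.CREATOR},
    Touchpoint.ONGOING_ENGAGEMENT: {Role.EDUCATOR, Role.CREATOR},
}


def may_act(role: Role, touchpoint: Touchpoint) -> bool:
    """Guard: only adopt a role sanctioned for the current touchpoint."""
    return role in ALLOWED_ROLES[touchpoint]


assert may_act(Role.ASSESSOR, Touchpoint.INTAKE_FORM)
assert not may_act(Role.CREATOR, Touchpoint.INTAKE_FORM)
```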
Sustained Engagement: Balancing Choice and Safety
Sustained engagement was contingent upon extensive customization options (interaction modality, access point, responsiveness), user-centric granularity in information delivery (opt-in detailed explanations), and technical accessibility. Data privacy and security, alongside interaction safety (real-time risk detection, appropriate escalation protocols), were non-negotiable requirements. Participants advocated for a granular control model wherein raw conversational data remains user-controlled, while AI-generated clinical insights are shareable with clinicians, thus resolving tensions between autonomy and duty-of-care. This aligns with contemporary privacy-preserving design and ethical frameworks in youth mental health.
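The control model participants described can be pictured as a two-tier store with different default visibilities for raw conversation versus derived insights. The sketch below uses hypothetical names (`SessionStore`, `Record`, `Visibility`); the paper describes the policy, not an implementation.

```python
# A minimal sketch of the granular control model: raw conversational data
# defaults to user-only visibility, AI-generated clinical insights default
# to clinician-shared, and the user can opt individual turns in.
from dataclasses import dataclass, field
from enum import Enum, auto


class Visibility(Enum):
    USER_ONLY = auto()         # never leaves the user's control
    CLINICIAN_SHARED = auto()  # visible to the care team


@dataclass
class Record:
    content: str
    visibility: Visibility


@dataclass
class SessionStore:
    """Separates raw transcripts from derived insights by default policy."""
    raw_turns: list[Record] = field(default_factory=list)
    insights: list[Record] = field(default_factory=list)

    def log_turn(self, text: str) -> None:
        # Raw conversation stays under the user's control by default.
        self.raw_turns.append(Record(text, Visibility.USER_ONLY))

    def log_insight(self, text: str) -> None:
        # Derived clinical insights are shareable with clinicians by default.
        self.insights.append(Record(text, Visibility.CLINICIAN_SHARED))

    def share_turn_with_clinician(self, index: int) -> None:
        # Opt-in disclosure of a single raw turn, at the user's request.
        self.raw_turns[index].visibility = Visibility.CLINICIAN_SHARED

    def clinician_view(self) -> list[str]:
        """What the clinician sees under the current consent state."""
        return [r.content for r in self.raw_turns + self.insights
                if r.visibility is Visibility.CLINICIAN_SHARED]
```

The design choice here is that consent attaches to each record rather than to the session as a whole, which is what lets autonomy (over raw disclosures) coexist with duty-of-care (over clinical insights).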
Ethical and Sociotechnical Implications
The findings substantiate several ethical imperatives: preservation of authentic human connection, transparency, proportionality in information disclosure, balancing of privacy and autonomy, equitable access, and risk governance. The risk of stratified care, where privileged users leverage advanced tools while marginalized populations are relegated to technological substitutes, is particularly salient and demands attention to digital literacy and access. Participants' feedback reflects not merely usability preferences but deeper ethical obligations rooted in autonomy, informed consent, and welfare protection. The clinical governance of automated risk detection, information sharing, and crisis intervention requires ongoing stakeholder negotiation and ethical reflection.
Practical Recommendations for Design, Development, and Implementation
Concrete recommendations from co-design workshops include:
- Adaptive communication and expectation management: Explicit onboarding to clarify system limitations, transparency in potentially distressing information presentation, and alignment of AI recommendations with clinical protocols.
- Granular privacy mechanisms: Delineation between raw conversational data (user-controlled) and AI-generated insights (clinician-shared); robust security and opt-in controls.
- Technical accessibility and personalization: Multiple modalities, platforms, and interaction structures; tailored information granularity; comprehensive accessibility.
- Service integration and governance: Clinician intervention pathways, multi-stakeholder governance frameworks, and sustained input from lived experience researchers; strategic positioning of genAI as non-disruptive, adjunctive tools (see the escalation sketch below).
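As an illustration of the clinician intervention pathway named above, the sketch below wires a stand-in risk classifier to an escalation policy. Both `detect_risk` and `EscalationPolicy` are hypothetical assumptions: a deployed system would use a validated risk model, a real alerting integration, and locally appropriate crisis numbers.

```python
# A minimal sketch of real-time risk detection feeding a clinician
# escalation pathway. The keyword classifier is a stand-in only; it is
# not a validated instrument and must not be used as one.
from dataclasses import dataclass
from enum import IntEnum


class RiskLevel(IntEnum):
    NONE = 0
    ELEVATED = 1
    CRISIS = 2


def detect_risk(message: str) -> RiskLevel:
    """Stand-in classifier; a real system would use a validated model."""
    text = message.lower()
    if "hurt myself" in text or "end it" in text:
        return RiskLevel.CRISIS
    if "hopeless" in text:
        return RiskLevel.ELEVATED
    return RiskLevel.NONE


@dataclass
class EscalationPolicy:
    crisis_line: str = "000"  # Australian emergency number; set per service

    def handle(self, message: str, risk: RiskLevel) -> str:
        if risk is RiskLevel.CRISIS:
            self.notify_clinician(message)  # duty-of-care pathway
            return ("I'm connecting you with a person now. If you are in "
                    f"immediate danger, call {self.crisis_line}.")
        if risk is RiskLevel.ELEVATED:
            self.notify_clinician(message)
            return "Thanks for telling me. I've flagged this for your clinician."
        return ""  # no escalation; the conversation continues normally

    def notify_clinician(self, message: str) -> None:
        # Placeholder for a real paging/alert integration.
        print(f"[ALERT] clinician review requested: {message!r}")


policy = EscalationPolicy()
msg = "I've been feeling hopeless lately."
print(policy.handle(msg, detect_risk(msg)))
```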
Limitations
Limitations include the study's grounding in the Australian mental health system, the restricted age range (18–30 years), the absence of younger and more marginalized voices, the focus on a single system (Mia), and short-term prototype evaluation rather than longitudinal deployment.
Conclusion
Young people's nuanced perceptions and recommendations on genAI chatbots in youth mental health encapsulate a sophisticated balancing of technical capability, ethical governance, sociotechnical integration, and individualized engagement. Humanizing AI without dehumanizing care, maximizing transparency, careful positioning within care journeys, and reconciling personalized control with safety are foundational for meaningful adoption. The ethical, practical, and theoretical implications underscore that successful deployment of conversational genAI in youth mental health requires ongoing, reflexive engagement with youth stakeholders, meticulous integration protocols, and a commitment to equitable, ethical care paradigms.