Zero-shot Explainable Mental Health Analysis on Social Media by Incorporating Mental Scales (2402.10948v2)
Abstract: Traditional discriminative approaches to mental health analysis offer strong predictive capacity but lack interpretability and demand large-scale annotated data. Generative approaches, such as those based on LLMs, can dispense with heavy annotation and provide explanations, but their performance still falls short of discriminative approaches, and their explanations may be unreliable because explanation generation is a black-box process. Inspired by the psychological assessment practice of using scales to evaluate mental states, our method, Mental Analysis by Incorporating Mental Scales (MAIMS), carries out two procedures via LLMs: first, the patient completes mental scales; second, the psychologist interprets the information collected from the mental scales and makes informed decisions. Experimental results show that MAIMS outperforms other zero-shot methods and generates more rigorous explanations grounded in the outputs of the mental scales.
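The two-stage procedure described in the abstract can be sketched as a simple prompting pipeline. This is a hypothetical illustration, not the paper's implementation: `query_llm`, the prompt wording, and the example scale items are all placeholder assumptions; a real system would swap in an actual chat-completion call and the scales used in the paper.

```python
# Minimal sketch of a MAIMS-style two-stage pipeline (illustrative only).

SCALE_ITEMS = [  # example PHQ-9-style items; placeholders, not the paper's scales
    "Little interest or pleasure in doing things",
    "Feeling down, depressed, or hopeless",
]


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned answer so the sketch runs."""
    return "response to: " + prompt.splitlines()[0]


def complete_scales(post: str, items, llm=query_llm):
    """Stage 1: the LLM role-plays the post's author and answers each scale item."""
    return {
        item: llm(f"Post: {post}\nAs the author of this post, rate: {item}")
        for item in items
    }


def interpret_scales(post: str, responses, llm=query_llm) -> str:
    """Stage 2: the LLM acts as a psychologist, grounding its decision in the
    collected scale responses so the explanation can be checked against them."""
    summary = "\n".join(f"- {item}: {answer}" for item, answer in responses.items())
    return llm(
        f"Post: {post}\nScale responses:\n{summary}\n"
        "Give a mental-health label and an explanation grounded in the responses above."
    )
```

Because the final explanation is conditioned on explicit scale outputs rather than produced in one opaque step, it can be audited item by item, which is the source of the rigor the abstract claims.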
Authors: Wenyu Li, Yinuo Zhu, Xin Lin, Ming Li, Ziyue Jiang, Ziqian Zeng