Zero-shot Explainable Mental Health Analysis on Social Media by Incorporating Mental Scales (2402.10948v2)

Published 9 Feb 2024 in cs.CL and cs.AI

Abstract: Traditional discriminative approaches to mental health analysis are known for their strong capacity but lack interpretability and demand large-scale annotated data. Generative approaches, such as those based on LLMs, have the potential to dispense with heavy annotation and to provide explanations, but their capabilities still fall short of discriminative approaches, and their explanations may be unreliable because explanation generation is a black-box process. Inspired by the psychological assessment practice of using scales to evaluate mental states, our method, Mental Analysis by Incorporating Mental Scales (MAIMS), incorporates two procedures via LLMs: first, the patient completes mental scales; second, the psychologist interprets the information collected from the scales and makes informed decisions. Experimental results show that MAIMS outperforms other zero-shot methods and generates more rigorous explanations based on the outputs of the mental scales.
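The two-stage procedure described above can be sketched as a simple zero-shot prompting pipeline. Everything below is an illustrative assumption: the prompts, the scale items, and the `call_llm` interface are placeholders, not the authors' actual implementation or prompts.

```python
# Hedged sketch of a MAIMS-style two-stage pipeline (illustrative only).
# Stage 1 ("patient"): an LLM answers each mental-scale item grounded in the post.
# Stage 2 ("psychologist"): an LLM interprets the completed scale and decides.
from typing import Callable, List

def complete_scale(post: str, scale_items: List[str],
                   call_llm: Callable[[str], str]) -> List[str]:
    """Stage 1: answer each scale item using only the social media post."""
    answers = []
    for item in scale_items:
        prompt = (
            f"Post: {post}\n"
            f"Scale item: {item}\n"
            "Answer based only on the post "
            "(e.g. 'not at all' ... 'nearly every day'):"
        )
        answers.append(call_llm(prompt))
    return answers

def interpret_scale(post: str, scale_items: List[str], answers: List[str],
                    call_llm: Callable[[str], str]) -> str:
    """Stage 2: produce a label and explanation grounded in the scale answers."""
    filled = "\n".join(f"- {q}: {a}" for q, a in zip(scale_items, answers))
    prompt = (
        f"Post: {post}\n"
        f"Completed scale:\n{filled}\n"
        "As a psychologist, give a label (depressed / not depressed) and an "
        "explanation grounded in the scale answers above:"
    )
    return call_llm(prompt)

def maims_zero_shot(post: str, scale_items: List[str],
                    call_llm: Callable[[str], str]) -> str:
    """Run both stages: complete the scale, then interpret it."""
    answers = complete_scale(post, scale_items, call_llm)
    return interpret_scale(post, scale_items, answers, call_llm)
```

Because the final decision must cite the completed scale rather than free-form reasoning, the explanation is anchored to observable scale outputs, which is the source of the "more rigorous explanation" claim in the abstract.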

Authors (6)
  1. Wenyu Li
  2. Yinuo Zhu
  3. Xin Lin
  4. Ming Li
  5. Ziyue Jiang
  6. Ziqian Zeng
Citations (1)