Can Large Language Models be Used to Provide Psychological Counselling? An Analysis of GPT-4-Generated Responses Using Role-play Dialogues (2402.12738v1)

Published 20 Feb 2024 in cs.CL, cs.AI, and cs.HC

Abstract: Mental health care poses an increasingly serious challenge to modern societies. Against this backdrop, research that applies information technology to mental health problems has surged, including efforts to develop counseling dialogue systems. However, the performance of counseling dialogue systems built on LLMs has not yet been adequately evaluated. For this study, we collected counseling dialogue data through role-play scenarios involving expert counselors and annotated each utterance with the counselor's intention. To assess the feasibility of such a dialogue system in real-world counseling, third-party counselors evaluated the appropriateness of responses written by human counselors and of those generated by GPT-4 in identical contexts drawn from the role-play dialogues. Analysis of the evaluations showed that GPT-4's responses were competitive with those of the human counselors.
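The core of the evaluation setup is response generation in context: GPT-4 receives the dialogue history up to a given turn of a role-play session and produces the counselor's next utterance, which third-party counselors then rate alongside the human counselor's response to the same context. Below is a minimal sketch of that generation step using the OpenAI chat API; the system prompt, turn format, and function name are illustrative assumptions, not the authors' actual prompt or pipeline.

```python
# Hedged sketch of the generation step described in the abstract: given the
# dialogue context of a role-play counseling session, ask GPT-4 for the
# counselor's next utterance. Prompt wording and helper names are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_counselor_response(dialogue_history: list[dict[str, str]]) -> str:
    """dialogue_history: turns like {"speaker": "client"|"counselor", "text": ...}."""
    messages = [{
        "role": "system",
        "content": ("You are an empathetic professional counselor. "
                    "Reply with your next utterance in the session."),
    }]
    # Map client turns to "user" and counselor turns to "assistant" so the
    # model continues the conversation as the counselor.
    for turn in dialogue_history:
        role = "user" if turn["speaker"] == "client" else "assistant"
        messages.append({"role": role, "content": turn["text"]})
    completion = client.chat.completions.create(model="gpt-4", messages=messages)
    return completion.choices[0].message.content


# Example context: the same history would also be answered by a human
# counselor, and third-party raters would score both responses blind.
context = [
    {"speaker": "client", "text": "Lately I can't sleep and I dread going to work."},
    {"speaker": "counselor", "text": "That sounds exhausting. How long has this been going on?"},
    {"speaker": "client", "text": "About two months, since my team was restructured."},
]
print(generate_counselor_response(context))
```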

Authors (3)
  1. Michimasa Inaba (7 papers)
  2. Mariko Ukiyo (3 papers)
  3. Keiko Takamizo (3 papers)
Citations (4)