Building Trust in Mental Health Chatbots: Safety Metrics and LLM-Based Evaluation Tools (2408.04650v1)

Published 3 Aug 2024 in cs.CL, cs.AI, cs.HC, and cs.LG

Abstract:

Objective: This study aims to develop and validate an evaluation framework to ensure the safety and reliability of mental health chatbots, which are increasingly popular due to their accessibility, human-like interactions, and context-aware support.

Materials and Methods: We created an evaluation framework with 100 benchmark questions and ideal responses, and five guideline questions for chatbot responses. This framework, validated by mental health experts, was tested on a GPT-3.5-turbo-based chatbot. Automated evaluation methods explored included LLM-based scoring, an agentic approach using real-time data, and embedding models to compare chatbot responses against ground truth standards.

Results: The results highlight the importance of guidelines and ground truth for improving LLM evaluation accuracy. The agentic method, dynamically accessing reliable information, demonstrated the best alignment with human assessments. Adherence to a standardized, expert-validated framework significantly enhanced chatbot response safety and reliability.

Discussion: Our findings emphasize the need for comprehensive, expert-tailored safety evaluation metrics for mental health chatbots. While LLMs have significant potential, careful implementation is necessary to mitigate risks. The superior performance of the agentic approach underscores the importance of real-time data access in enhancing chatbot reliability.

Conclusion: The study validated an evaluation framework for mental health chatbots, proving its effectiveness in improving safety and reliability. Future work should extend evaluations to accuracy, bias, empathy, and privacy to ensure holistic assessment and responsible integration into healthcare. Standardized evaluations will build trust among users and professionals, facilitating broader adoption and improved mental health support through technology.
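Of the three automated evaluation methods the abstract names, the embedding-based comparison is the most straightforward to illustrate. Below is a minimal sketch, assuming a sentence-transformers embedding model and a cosine-similarity score; the paper does not specify the embedding model, threshold, or example texts, so all of those here are illustrative assumptions, not the authors' configuration.

# Hypothetical sketch of embedding-based evaluation: score a chatbot
# response by cosine similarity to an expert-written ideal response.
# Model name, threshold, and example texts are assumptions for illustration.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def similarity_to_ground_truth(response: str, ideal: str) -> float:
    """Cosine similarity between a chatbot response and the ideal answer."""
    emb = model.encode([response, ideal])
    a, b = emb[0], emb[1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    ideal = ("If you are having thoughts of self-harm, please contact a "
             "crisis line such as 988 immediately.")
    response = ("Please reach out to the 988 crisis line right away if you "
                "feel unsafe.")
    score = similarity_to_ground_truth(response, ideal)
    # A higher score means the response is semantically closer to the
    # expert ideal; a flagging threshold would be tuned on benchmark data.
    print(f"similarity = {score:.3f}")

In the paper's framework, a score like this would be computed against each of the 100 expert-validated benchmark responses; the agentic method that performed best additionally retrieves reliable real-time information rather than relying on static ground truth alone.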

Authors (12)
  1. Jung In Park (1 paper)
  2. Mahyar Abbasian (9 papers)
  3. Iman Azimi (20 papers)
  4. Dawn Bounds (1 paper)
  5. Angela Jun (1 paper)
  6. Jaesu Han (1 paper)
  7. Robert McCarron (1 paper)
  8. Jessica Borelli (2 papers)
  9. Jia Li (380 papers)
  10. Mona Mahmoudi (1 paper)
  11. Carmen Wiedenhoeft (1 paper)
  12. Amir Rahmani (21 papers)