Process Knowledge-infused Learning for Clinician-friendly Explanations (2306.09824v1)

Published 16 Jun 2023 in cs.CL and cs.AI

Abstract: LLMs have the potential to assess mental health using social media data. By analyzing online posts and conversations, these models can detect patterns indicating mental health conditions like depression, anxiety, or suicidal thoughts. They examine keywords, language markers, and sentiment to gain insights into an individual's mental well-being. This information is crucial for early detection, intervention, and support, improving mental health care and prevention strategies. However, using LLMs for mental health assessments from social media has two limitations: (1) They do not compare posts against clinicians' diagnostic processes, and (2) It's challenging to explain LLM outputs using concepts that the clinician can understand, i.e., clinician-friendly explanations. In this study, we introduce Process Knowledge-infused Learning (PK-iL), a new learning paradigm that layers clinical process knowledge structures on LLM outputs, enabling clinician-friendly explanations of the underlying LLM predictions. We rigorously test our methods on existing benchmark datasets, augmented with such clinical process knowledge, and release a new dataset for assessing suicidality. PK-iL performs competitively, achieving a 70% agreement with users, while other XAI methods only achieve 47% agreement (average inter-rater agreement of 0.72). Our evaluations demonstrate that PK-iL effectively explains model predictions to clinicians.
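
To make the "layering" idea concrete, the sketch below (not the paper's implementation) shows one way a clinical process-knowledge structure could sit on top of LLM outputs: an ordered, C-SSRS-style list of screening concepts is checked against a post one step at a time, and the resulting step-by-step trace doubles as the clinician-friendly explanation. The step list, the zero-shot model used as a stand-in LLM, and the 0.5 threshold are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming a hand-written, C-SSRS-style process-knowledge
# structure and an off-the-shelf zero-shot model standing in for the LLM.
# The severity label is derived from an ordered trace of screening steps,
# so the trace itself is the clinician-facing explanation.
from transformers import pipeline

# Hypothetical process-knowledge structure: ordered screening concepts,
# loosely modeled on a C-SSRS-like progression (illustrative only).
PROCESS_STEPS = [
    ("wish_to_be_dead", "expressing a wish to be dead"),
    ("active_ideation", "expressing active suicidal thoughts"),
    ("plan_or_intent", "describing a suicide plan or intent to act"),
]

# Zero-shot NLI classifier used here as a stand-in for a fine-tuned LLM.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def assess_with_explanation(post: str, threshold: float = 0.5) -> dict:
    """Walk the process-knowledge steps in order; the ordered record of
    satisfied/unsatisfied steps serves as the explanation."""
    trace = []
    for step_id, concept in PROCESS_STEPS:
        result = classifier(post, candidate_labels=[concept], multi_label=True)
        score = float(result["scores"][0])
        satisfied = score >= threshold
        trace.append({"step": step_id, "concept": concept,
                      "score": round(score, 3), "satisfied": satisfied})
        if not satisfied:      # stop at the first unmet step, mirroring
            break              # how a staged clinical screen proceeds
    severity = sum(item["satisfied"] for item in trace)
    return {"severity_level": severity, "explanation": trace}

if __name__ == "__main__":
    example = "I can't see a way forward anymore and I've started giving my things away."
    print(assess_with_explanation(example))
```

In this framing, the LLM only answers narrow, concept-level questions, while the process-knowledge layer decides how those answers combine into an assessment, which is what keeps the output legible to a clinician.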

Authors (7)
  1. Kaushik Roy (265 papers)
  2. Yuxin Zi (8 papers)
  3. Manas Gaur (59 papers)
  4. Jinendra Malekar (6 papers)
  5. Qi Zhang (784 papers)
  6. Vignesh Narayanan (20 papers)
  7. Amit Sheth (127 papers)
Citations (14)