
Large Language Models in Mental Health Care: a Scoping Review (2401.02984v2)

Published 1 Jan 2024 in cs.CL and cs.AI

Abstract: The integration of LLMs into mental health care is an emerging field, and there is a need to systematically review application outcomes and delineate their advantages and limitations in clinical settings. This review provides a comprehensive overview of the use of LLMs in mental health care, assessing their efficacy, challenges, and potential for future applications. A systematic search was conducted in November 2023 across multiple databases, including PubMed, Web of Science, Google Scholar, arXiv, medRxiv, and PsyArXiv. All forms of original research, peer-reviewed or not, published or disseminated between October 1, 2019, and December 2, 2023, were included without language restrictions if they used LLMs developed after T5 and directly addressed research questions in mental health care settings. From an initial pool of 313 articles, 34 met the inclusion criteria based on their relevance to LLM applications in mental health care and the robustness of reported outcomes. The review identifies diverse applications of LLMs in mental health care, including diagnosis, therapy, and patient engagement enhancement. Key challenges include data availability and reliability, nuanced handling of mental states, and effective evaluation methods. Despite successes in improving accuracy and accessibility, gaps in clinical applicability and ethical considerations were evident, pointing to the need for robust data, standardized evaluations, and interdisciplinary collaboration. LLMs hold substantial promise for enhancing mental health care, but realizing their full potential will require robust datasets, sound development and evaluation frameworks, ethical guidelines, and interdisciplinary collaboration to address current limitations.

Authors (11)
  1. Yining Hua
  2. Fenglin Liu
  3. Kailai Yang
  4. Zehan Li
  5. Yi-han Sheu
  6. Peilin Zhou
  7. Lauren V. Moran
  8. Sophia Ananiadou
  9. Andrew Beam
  10. Hongbin Na
  11. John Torous
Citations (21)