
Detection and Positive Reconstruction of Cognitive Distortion sentences: Mandarin Dataset and Evaluation (2405.15334v1)

Published 24 May 2024 in cs.CL and cs.HC

Abstract: This research introduces a Positive Reconstruction Framework based on positive psychology theory. Overcoming negative thoughts can be challenging; our objective is to address and reframe them through positive reinterpretation. To tackle this challenge, a two-fold approach is necessary: identifying cognitive distortions and suggesting a positively reframed alternative while preserving the original thought's meaning. Recent studies have investigated the application of NLP models in English for each stage of this process. In this study, we emphasize the theoretical foundation for the Positive Reconstruction Framework, grounded in broaden-and-build theory. We provide a shared corpus containing 4001 instances for detecting cognitive distortions and 1900 instances for positive reconstruction in Mandarin. Leveraging recent NLP techniques, including transfer learning, fine-tuning pretrained networks, and prompt engineering, we demonstrate the effectiveness of automated tools for both tasks. In summary, our study contributes to multilingual positive reconstruction, highlighting the effectiveness of NLP in cognitive distortion detection and positive reconstruction.
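The abstract describes a two-stage pipeline: detect a cognitive distortion in a sentence, then prompt a model for a positive reconstruction that preserves the original meaning. The sketch below illustrates only the prompt-construction step of the second stage; the distortion labels, prompt wording, and example sentence are illustrative assumptions, not the paper's actual templates.

```python
# Illustrative sketch (not the paper's implementation): given a Mandarin
# sentence already flagged with a distortion type, compose a prompt asking
# an instruction-tuned LLM for a positive reconstruction. The label set
# here is a hypothetical subset of common cognitive-distortion categories.
DISTORTION_TYPES = [
    "all-or-nothing thinking",
    "overgeneralization",
    "mental filter",
    "jumping to conclusions",
    "catastrophizing",
    "labeling",
]

def build_reframing_prompt(sentence: str, distortion: str) -> str:
    """Compose a positive-reconstruction prompt for one flagged sentence."""
    if distortion not in DISTORTION_TYPES:
        raise ValueError(f"unknown distortion type: {distortion}")
    return (
        "The following sentence contains the cognitive distortion "
        f"'{distortion}':\n"
        f"  {sentence}\n"
        "Rewrite it as a positive reconstruction that preserves the "
        "original meaning but removes the distortion."
    )

# Example usage with a hypothetical input sentence:
prompt = build_reframing_prompt("我考砸了一次，我永远都学不好。", "overgeneralization")
print(prompt)
```

In the full pipeline, the returned prompt would be sent to a model such as those the paper fine-tunes or prompts; validating the label before composing the prompt keeps the detection and reconstruction stages cleanly decoupled.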
