
Enhancing Large Language Model with Decomposed Reasoning for Emotion Cause Pair Extraction (2401.17716v1)

Published 31 Jan 2024 in cs.CL

Abstract: Emotion-Cause Pair Extraction (ECPE) involves extracting clause pairs representing emotions and their causes in a document. Existing methods tend to overfit spurious correlations, such as the positional bias in existing benchmark datasets, rather than capturing semantic features. Inspired by recent work, we explore leveraging LLMs to address the ECPE task without additional training. Despite their strong capabilities, LLMs suffer from uncontrollable outputs, resulting in mediocre performance. To address this, we introduce chain-of-thought prompting to mimic the human cognitive process and propose the Decomposed Emotion-Cause Chain (DECC) framework. By combining inducing inference with logical pruning, DECC guides LLMs to tackle the ECPE task. We further enhance the framework by incorporating in-context learning. Experimental results demonstrate the strength of DECC compared to state-of-the-art supervised fine-tuning methods. Finally, we analyze the effectiveness of each component and the robustness of the method in various scenarios, including different LLM bases, rebalanced datasets, and multi-pair extraction.
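The staged approach the abstract describes can be sketched as a decomposed prompting pipeline. The sketch below is illustrative only: the `llm` callable, the prompt wording, and the verification-based pruning rule are assumptions for demonstration, not the paper's actual DECC prompts.

```python
# Hedged sketch of a decomposed, chain-of-thought-style pipeline for
# Emotion-Cause Pair Extraction (ECPE). `llm` stands in for any LLM
# completion function; prompts and the pruning check are illustrative
# assumptions, not the paper's exact DECC design.

def extract_pairs(clauses, llm):
    """Return (emotion_idx, cause_idx) clause pairs via staged prompting."""
    doc = "\n".join(f"[{i}] {c}" for i, c in enumerate(clauses))

    # Step 1 (inducing inference): ask which clauses express an emotion.
    emotion_ids = llm(f"List the indices of clauses expressing an emotion:\n{doc}")

    pairs = []
    for e in emotion_ids:
        # Step 2: for each emotion clause, induce candidate cause clauses.
        candidates = llm(f"Which clauses cause the emotion in clause [{e}]?\n{doc}")
        # Step 3 (logical pruning, simplified): keep only pairs the model
        # reaffirms when asked to verify them in isolation.
        for c in candidates:
            if llm(f"Does clause [{c}] cause the emotion in clause [{e}]? (yes/no)"):
                pairs.append((e, c))
    return pairs


if __name__ == "__main__":
    clauses = ["He lost his job", "his savings ran out", "he felt despair"]

    # Stub LLM for demonstration: a real system would call an actual model.
    def fake_llm(prompt):
        if prompt.startswith("List"):
            return [2]          # clause 2 expresses an emotion
        if prompt.startswith("Which"):
            return [0, 1]       # both earlier clauses proposed as causes
        return "[1]" in prompt  # verification confirms only clause 1

    print(extract_pairs(clauses, fake_llm))
```

In-context learning, which the paper adds on top of this chain, would amount to prepending a few worked emotion-cause examples to each prompt.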

Authors (4)
  1. Jialiang Wu (2 papers)
  2. Yi Shen (107 papers)
  3. Ziheng Zhang (43 papers)
  4. Longjun Cai (10 papers)
Citations (4)