Recognizing Conditional Causal Relationships about Emotions and Their Corresponding Conditions (2311.16579v1)

Published 28 Nov 2023 in cs.CL

Abstract: The study of causal relationships between emotions and their causes in texts has recently received much attention. Most works focus on extracting causally related clauses from documents. However, none of these works has considered that the causal relationship between an extracted emotion clause and cause clause may be valid only under specific context clauses. To highlight the role of context in such causal relationships, we propose a new task: determine whether an input emotion-cause pair has a valid causal relationship under different contexts, and extract the specific context clauses that participate in the causal relationship. Since the task is new and no existing dataset is available, we manually annotate a benchmark dataset to obtain labels for our task, along with annotations of each context clause's type that can also be used in other applications. We adopt negative sampling to construct the final dataset, balancing the number of documents with and without causal relationships. Based on the constructed dataset, we propose an end-to-end multi-task framework with two novel and general modules that address the two goals of our task: a context masking module that extracts the context clauses participating in the causal relationship, and a prediction aggregation module that fine-tunes the prediction results according to whether the input emotion and cause depend on specific context clauses. Results of extensive comparative experiments and ablation studies demonstrate the effectiveness and generality of our proposed framework.
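
To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of the two modules it names: a context masking module that softly selects which context clauses participate in the causal relationship, and a prediction aggregation module that blends a context-free prediction for the emotion-cause pair with a context-aware one. The class names, dimensions, and the assumption that clauses arrive as pre-encoded fixed-size vectors are illustrative choices for this sketch, not the authors' actual implementation.

# Minimal sketch of the two modules described in the abstract.
# All names, dimensions, and the clause encoding are assumptions for illustration.
import torch
import torch.nn as nn


class ContextMasking(nn.Module):
    """Scores each context clause and produces a soft participation mask."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(3 * hidden_dim, 1)

    def forward(self, context, emotion, cause):
        # context: (batch, num_clauses, hidden); emotion, cause: (batch, hidden)
        pair = torch.cat([emotion, cause], dim=-1)                 # (batch, 2*hidden)
        pair = pair.unsqueeze(1).expand(-1, context.size(1), -1)   # (batch, num_clauses, 2*hidden)
        scores = self.scorer(torch.cat([context, pair], dim=-1))   # (batch, num_clauses, 1)
        return torch.sigmoid(scores)                               # soft mask in [0, 1]


class PredictionAggregation(nn.Module):
    """Blends a context-free and a context-aware causality prediction."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.pair_classifier = nn.Linear(2 * hidden_dim, 1)     # emotion-cause pair only
        self.context_classifier = nn.Linear(3 * hidden_dim, 1)  # pair plus masked context
        self.gate = nn.Linear(2 * hidden_dim, 1)                # how context-dependent the pair is

    def forward(self, context, mask, emotion, cause):
        pair = torch.cat([emotion, cause], dim=-1)
        ctx = (mask * context).sum(dim=1)                        # summary of masked context clauses
        p_pair = self.pair_classifier(pair)
        p_ctx = self.context_classifier(torch.cat([pair, ctx], dim=-1))
        g = torch.sigmoid(self.gate(pair))                       # weight on the context-aware branch
        return torch.sigmoid(g * p_ctx + (1 - g) * p_pair)       # P(valid causal relationship)


if __name__ == "__main__":
    batch, num_clauses, hidden = 2, 5, 16
    context = torch.randn(batch, num_clauses, hidden)
    emotion = torch.randn(batch, hidden)
    cause = torch.randn(batch, hidden)

    masking = ContextMasking(hidden)
    aggregation = PredictionAggregation(hidden)

    mask = masking(context, emotion, cause)            # which context clauses participate
    prob = aggregation(context, mask, emotion, cause)  # validity of the causal relationship
    print(mask.shape, prob.shape)                      # torch.Size([2, 5, 1]) torch.Size([2, 1])

In this sketch the gate decides how much the final prediction should rely on the masked context; a pair whose causality holds regardless of context drives the gate toward the context-free branch, while a conditional pair leans on the context-aware branch.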

